
Saturday, June 17, 2023

Problem solving

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Problem_solving

Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles. Another classification is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for. Similarly, one may distinguish formal or fact-based problems requiring psychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such as tactful behavior, fashion, or gift choices.

Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, and consultants are largely problem solvers for issues which require technical skills and knowledge beyond general competence. Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop a scalable solution.

There are many specialized problem-solving techniques and methods in fields such as engineering, business, medicine, mathematics, computer science, philosophy, and social organization. The mental techniques to identify, analyze, and solve problems are studied in psychology and cognitive sciences. Additionally, the mental obstacles that prevent people from finding solutions are a widely researched topic: problem-solving impediments include confirmation bias, mental set, and functional fixedness.

Definition

The term problem solving has a slightly different meaning depending on the discipline. For instance, it is a mental process in psychology and a computerized process in computer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems. Solving problems sometimes involves dealing with pragmatics, the way that context contributes to meaning, and semantics, the interpretation of the problem. The ability to understand the end goal of the problem, and what rules could be applied, represents the key to solving it. Sometimes the problem requires abstract thinking or coming up with a creative solution.

Psychology

Problem solving in psychology refers to the process of finding solutions to problems encountered in life. Solutions to these problems are usually situation- or context-specific. The process starts with problem finding and problem shaping, in which the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally, a solution is selected to be implemented and verified. Problems have an end goal to be reached; how one gets there depends upon problem orientation (problem-solving coping style and skills) and systematic analysis. Mental health professionals study the human problem-solving process using methods such as introspection, behaviorism, simulation, computer modeling, and experiment. Social psychologists look into the person-environment relationship aspect of the problem and independent and interdependent problem-solving methods.[6] Problem solving has been defined as a higher-order cognitive process and intellectual function that requires the modulation and control of more routine or fundamental skills.

Problem solving has two major domains: mathematical problem solving and personal problem solving. Both are seen in terms of some difficulty or barrier that is encountered. Empirical research shows that many different strategies and factors influence everyday problem solving. Rehabilitation psychologists studying individuals with frontal lobe injuries have found that deficits in emotional control and reasoning can be remediated with effective rehabilitation, improving the capacity of injured persons to resolve everyday problems. Interpersonal everyday problem solving depends upon personal motivational and contextual components. One such component is the emotional valence of "real-world" problems, which can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving, demonstrating that poor emotional control can disrupt focus on the target task, impede problem resolution, and likely lead to negative outcomes such as fatigue, depression, and inertia. In conceptualization, human problem solving consists of two related processes: problem orientation (the motivational/attitudinal/affective approach to problematic situations) and problem-solving skills. Studies conclude that people's strategies cohere with their goals and stem from the natural process of comparing oneself with others.

Cognitive sciences

Among the first experimental psychologists to study problem solving were the Gestaltists in Germany, e.g., Karl Duncker in The Psychology of Productive Thinking (1935). Perhaps best known is the work of Allen Newell and Herbert A. Simon.

Experiments in the 1960s and early 1970s asked participants to solve relatively simple, well-defined, but previously unseen laboratory tasks. These simple problems, such as the Tower of Hanoi, admitted optimal solutions that could be found quickly, allowing observation of the full problem-solving process. Researchers assumed that these model problems would elicit the characteristic cognitive processes by which more complex "real world" problems are solved.

A notable problem-solving technique identified by this research is the principle of decomposition, sketched below.
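
To make the principle concrete, here is a minimal sketch of decomposition applied to the Tower of Hanoi mentioned above, assuming a standard recursive formulation (the function and peg names are illustrative, not from any particular study):

    def hanoi(n, source, target, spare, moves):
        # Decomposition: an n-disc problem reduces to two (n - 1)-disc subproblems.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs
        moves.append((source, target))              # move the largest disc directly
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller discs on it

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves))  # 7 moves, the optimal 2**n - 1 for n = 3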

Computer science

Much of computer science and artificial intelligence involves designing automatic systems to solve a specified type of problem: to accept input data and calculate a correct or adequate response, reasonably quickly. Algorithms are recipes or instructions that direct such systems, written into computer programs.

Steps for designing such systems include problem determination, heuristics, root cause analysis, de-duplication, analysis, diagnosis, and repair. Analytic techniques include linear and nonlinear programming, queuing systems, and simulation. A large, perennial obstacle is to find and fix errors in computer programs: debugging.
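
To illustrate what "algorithm as recipe" means in practice, here is a generic sketch (not an example drawn from the article) of binary search, with assertions of the kind a programmer might add while debugging:

    def binary_search(items, key):
        # Return the index of key in the sorted list items, or -1 if absent.
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == key:
                return mid
            elif items[mid] < key:
                lo = mid + 1   # key can only be in the upper half
            else:
                hi = mid - 1   # key can only be in the lower half
        return -1

    # Simple tests of the kind used while debugging, to catch regressions.
    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1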

Logic

Formal logic is concerned with such issues as validity, truth, inference, argumentation and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution. The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods such as the resolution principle developed by John Alan Robinson.

In addition to its use for finding proofs of mathematical theorems, automated theorem proving has also been used for program verification in computer science. As early as 1958, John McCarthy proposed the advice taker, to represent information in formal logic and to derive answers to questions using automated theorem proving. An important step in this direction was made by Cordell Green in 1969, using a resolution theorem prover for question answering and for such other applications in artificial intelligence as robot planning.
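
As a hedged illustration of the resolution principle, the sketch below implements propositional resolution refutation only (Robinson's method works on first-order clauses with unification); the encoding of clauses as frozensets of signed integers is an assumption made here for brevity:

    from itertools import combinations

    def resolvents(c1, c2):
        # Every clause obtained by resolving c1 and c2 on a complementary literal.
        for lit in c1:
            if -lit in c2:
                yield (c1 - {lit}) | (c2 - {-lit})

    def unsatisfiable(clauses):
        # Saturate the clause set; True if the empty clause is derivable.
        clauses = set(clauses)
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolvents(c1, c2):
                    if not r:
                        return True      # empty clause: contradiction found
                    new.add(frozenset(r))
            if new <= clauses:
                return False             # saturated without contradiction: satisfiable
            clauses |= new

    # Premises: P and (P -> Q), i.e. {-P, Q}; negated goal: not Q.
    # Deriving the empty clause refutes the negation, proving Q.
    P, Q = 1, 2
    print(unsatisfiable({frozenset({P}), frozenset({-P, Q}), frozenset({-Q})}))  # True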

The resolution theorem prover used by Cordell Green bore little resemblance to human problem-solving methods. In response to criticism of his approach from researchers at MIT, Robert Kowalski developed logic programming and SLD resolution, which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving, and computational logic to improve human thinking.

Engineering

Problem solving is used when products or processes fail, so corrective action can be taken to prevent further failures. It can also be applied to a product or process prior to an actual failure event—when a potential problem can be predicted and analyzed, and mitigation applied to prevent the problem. Techniques such as failure mode and effects analysis can proactively reduce the likelihood of problems.

In either case, it is necessary to build a causal explanation through a process of diagnosis. Staat summarizes the derivation of explanation through diagnosis as follows: in deriving an explanation of effects in terms of causes, abduction plays the role of generating new ideas or hypotheses (asking “how?”); deduction evaluates and refines the hypotheses based on other plausible premises (asking “why?”); and induction justifies the hypothesis with empirical data (asking “how much?”). The objective of abduction is to determine which hypothesis or proposition to test, not which one to adopt or assert. In the Peircean logical system, the logic of abduction and deduction contribute to our conceptual understanding of a phenomenon, while the logic of induction adds quantitative details (empirical substantiation) to our conceptual knowledge.

Forensic engineering is an important technique of failure analysis that involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures.

Reverse engineering attempts to discover the original problem-solving logic used in developing a product by taking it apart.

Military science

In military science, problem solving is linked to the concept of "end-states", the condition or situation which is the aim of the strategy. The ability to solve problems is important at any military rank, but is essential at the command and control level, where it results from deep qualitative and quantitative understanding of possible scenarios. Effectiveness is the evaluation of results: whether the goal was accomplished. Planning is the process of determining how to achieve the goal.

Processes

Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack: the mind contains a stack of goals and subgoals to be completed, with a single task being carried out at any time.
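
A toy sketch of the goal-stack idea (the goals and subgoals here are invented; ACT-R itself is a far richer architecture): the mind attends only to the top goal, and a composite goal pushes its subgoals onto the stack.

    # Toy goal stack: work on the top goal only; a composite goal is
    # replaced by its subgoals, pushed so the first subgoal is done first.
    subgoals = {
        "make tea": ["boil water", "steep leaves", "pour"],
        "boil water": ["fill kettle", "heat kettle"],
    }

    stack = ["make tea"]
    while stack:
        goal = stack.pop()                          # attend to the current (top) goal
        if goal in subgoals:
            stack.extend(reversed(subgoals[goal]))  # push subgoals, first on top
        else:
            print("do:", goal)                      # a primitive action: carry it out
    # Prints: fill kettle, heat kettle, steep leaves, pour (in that order).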

It has been observed that knowledge of how to solve one problem can be applied to another problem, in a process known as transfer.

Problem-solving strategies

Problem-solving strategies are steps to overcoming the obstacles to achieving a goal, the "problem-solving cycle".

Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizing knowledge and resources available, monitoring progress, and evaluating the effectiveness of the solution. Once a solution is achieved, another problem usually arises, and the cycle starts again.

Insight is the sudden aha! solution to a problem, the birth of a new idea to simplify a complex situation. Solutions found through insight are often more incisive than those from step-by-step analysis. A quick solution process requires insight to select productive moves at different stages of the problem-solving cycle. Unlike Newell and Simon's formal definition of a move problem, there is no consensus definition of an insight problem.

Some problem-solving strategies include:

  • Abstraction: solving the problem in a tractable model system to gain insight into the real system
  • Analogy: adapting the solution to a previous problem which has similar features or mechanisms
  • Brainstorming: (especially among groups of people) suggesting a large number of solutions or ideas and combining and developing them until an optimum solution is found
  • Critical thinking
  • Divide and conquer: breaking down a large, complex problem into smaller, solvable problems
  • Hypothesis testing: assuming a possible explanation to the problem and trying to prove (or, in some contexts, disprove) the assumption
  • Lateral thinking: approaching solutions indirectly and creatively
  • Means-ends analysis: choosing an action at each step to move closer to the goal (a small sketch follows this list)
  • Morphological analysis: assessing the output and interactions of an entire system
  • Proof of impossibility: try to prove that the problem cannot be solved. The point where the proof fails will be the starting point for solving it
  • Reduction: transforming the problem into another problem for which solutions exist
  • Research: employing existing ideas or adapting existing solutions to similar problems
  • Root cause analysis: identifying the cause of a problem
  • Trial-and-error: testing possible solutions until the right one is found
  • Help-seeking
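
Here is a minimal sketch of means-ends analysis on an invented numeric puzzle: at each step the solver applies the operator that most reduces the difference between the current state and the goal.

    # Means-ends analysis on a toy numeric state: each step, apply the
    # operator whose result is closest to the goal.
    operators = {"+10": lambda x: x + 10, "+1": lambda x: x + 1, "-1": lambda x: x - 1}

    def solve(state, goal):
        plan = []
        while state != goal:
            # Pick the operator that most reduces the current-to-goal difference.
            name, op = min(operators.items(), key=lambda kv: abs(kv[1](state) - goal))
            state = op(state)
            plan.append(name)
        return plan

    print(solve(0, 23))  # ['+10', '+10', '+1', '+1', '+1']

This greedy difference reduction can stall on problems where the distance to the goal must temporarily increase, which is exactly where insight or lookahead becomes necessary.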

Common barriers

Common barriers to problem solving are mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are: confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information.

Confirmation bias

Confirmation bias is an unintentional tendency to collect and use data which favors preconceived notions. Such notions may be incidental rather than motivated by important personal beliefs: the desire to be right may be sufficient motivation. Research has found that scientific and technical professionals also experience confirmation bias.

For instance, an online experiment by Andreas Hergovich, Reinhard Schott, and Christoph Burger suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than clashing studies. According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused of witchcraft demonstrated confirmation bias with motivation. Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments to produce favorable results.

However, confirmation bias does not necessarily require motivation. In 1960, Peter Cathcart Wason conducted an experiment in which participants first viewed three numbers and then formed a hypothesis about the rule used to generate that triplet. When testing their hypotheses, participants tended to create only additional triplets that would confirm them, and tended not to create triplets that would negate or disprove them.
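
A small simulation of that positive-test trap (the classic version of Wason's task presented the triplet 2, 4, 6, with the hidden rule "any strictly increasing sequence"; the "add 2" hypothesis below is the one participants typically form). Confirming tests can never separate the hypothesis from the true rule, while a single disconfirming test can:

    true_rule = lambda a, b, c: a < b < c                    # hidden rule: strictly increasing
    hypothesis = lambda a, b, c: b == a + 2 and c == b + 2   # typical guess: "add 2"

    # Positive tests: triplets chosen to fit the hypothesis. They all also
    # satisfy the true rule, so they can never falsify the hypothesis.
    for t in [(8, 10, 12), (1, 3, 5), (100, 102, 104)]:
        assert hypothesis(*t) and true_rule(*t)

    # A negative test: a triplet that violates the hypothesis. The true rule
    # still accepts it, disconfirming the hypothesis in one step.
    t = (1, 2, 3)
    print(hypothesis(*t), true_rule(*t))  # False True -> "add 2" is not the rule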

Mental set

Mental set is the inclination to re-use a previously successful solution, rather than search for new and better solutions. It is a reliance on habit.

It was first articulated by Abraham Luchins in the 1940s with his well-known water jug experiments. Participants were asked to measure out a specific amount of water using jugs with different maximum capacities. After Luchins gave a set of jug problems that could all be solved by a single technique, he then introduced a problem that could be solved by the same technique but also by a novel and simpler method. His participants tended to use the accustomed technique, oblivious to the simpler alternative. A related effect appeared in Norman Maier's 1931 experiment, which challenged participants to solve a problem by using a familiar tool (pliers) in an unconventional manner. Participants were often unable to view the object in a way that strayed from its typical use, a type of mental set known as functional fixedness (see the following section).

Rigidly clinging to a mental set is called fixation, which can deepen to an obsession or preoccupation with attempted strategies that are repeatedly unsuccessful. In the late 1990s, researcher Jennifer Wiley found that professional expertise in a field can create a mental set, perhaps leading to fixation.

Groupthink, where each individual takes on the mindset of the rest of the group, can produce and exacerbate mental set. Social pressure leads to everybody thinking the same thing and reaching the same conclusions.

Functional fixedness

Functional fixedness is the tendency to view an object as having only one function, unable to conceive of any novel use, as in the Maier pliers experiment above. Functional fixedness is a specific form of mental set, and is one of the most common forms of cognitive bias in daily life.

Tim German and Clark Barrett describe this barrier: "subjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated." Their research found that young children's limited knowledge of an object's intended function reduces this barrier. Research has also discovered functional fixedness in many educational instances, as an obstacle to understanding. Furio, Calatayud, Baracenas, and Padilla stated: "... functional fixedness may be found in learning concepts as well as in solving chemistry problems."

As an example, imagine a man wants to kill a bug in his house, but the only thing at hand is a can of air freshener. He may start searching for something to kill the bug instead of squashing it with the can, thinking only of its main function of deodorizing.

There are several hypotheses regarding how functional fixedness relates to problem solving. It may waste time, delaying or entirely preventing the correct use of a tool.

Unnecessary constraints

Unnecessary constraints are arbitrary boundaries imposed unconsciously on the task at hand, which foreclose a productive avenue of solution. The solver may become fixated on only one type of solution, as if it were an inevitable requirement of the problem. Typically, this combines with mental set, clinging to a previously successful method.

Visual problems can also produce mentally invented constraints. A famous example is the dot problem: nine dots arranged in a three-by-three grid pattern must be connected by drawing four straight line segments, without lifting pen from paper or backtracking along a line. The subject typically assumes the pen must stay within the outer square of dots, but the solution requires lines continuing beyond this frame, and researchers have found a 0% solution rate within a brief allotted time.

This problem has produced the expression "think outside the box". Such problems are typically solved via a sudden insight which leaps over the mental barriers, often after long toil against them. This can be difficult depending on how the subject has structured the problem in their mind, how they draw on past experiences, and how well they juggle this information in their working memory. In the example, envisioning the dots connected outside the framing square requires visualizing an unconventional arrangement, a strain on working memory.

Irrelevant information

Irrelevant information is a specification or data presented in a problem that is unrelated to the solution. If the solver assumes that all information presented needs to be used, this often derails the problem solving process, making relatively simple problems much harder.

For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?" The "obvious" answer is 15%, but in fact none of the unlisted people would be listed among the 200. This kind of "trick question" is often used in aptitude tests or cognitive evaluations. Though not inherently difficult, they require independent thinking that is not necessarily common. Mathematical word problems often include irrelevant qualitative or numerical information as an extra challenge.

Avoiding barriers by changing problem representation

The disruption caused by the above cognitive biases can depend on how the information is represented: visually, verbally, or mathematically. A classic example is the Buddhist monk problem:

A Buddhist monk begins at dawn one day walking up a mountain, reaches the top at sunset, meditates at the top for several days until one dawn when he begins to walk back to the foot of the mountain, which he reaches at sunset. Making no assumptions about his starting or stopping or about his pace during the trips, prove that there is a place on the path which he occupies at the same hour of the day on the two separate journeys.

The problem is difficult to address in a verbal context, by trying to describe the monk's progress on each day. It becomes much easier when the paragraph is represented mathematically by a function: one visualizes a graph whose horizontal axis is the time of day and whose vertical axis shows the monk's position (or altitude) on the path at each time. Superimposing the two journey curves, which traverse opposite diagonals of a rectangle, one sees that they must cross each other somewhere. The visual representation by graphing has resolved the difficulty.
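
The same representation shift can be made numerical. In this sketch the two position profiles are invented, but any continuous profiles with the stated endpoints work; their difference is negative at dawn and positive at dusk, so by the intermediate value theorem it must cross zero:

    # Positions on the path as fractions of the way up, over a day t in [0, 1].
    # Any continuous profiles work, provided up(0)=0, up(1)=1, down(0)=1, down(1)=0.
    up   = lambda t: t ** 2            # slow start climbing
    down = lambda t: 1 - t ** 0.5      # fast start descending

    def crossing(f, lo=0.0, hi=1.0, steps=50):
        # Bisection: f(lo) < 0 < f(hi) guarantees a root in between.
        for _ in range(steps):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return (lo + hi) / 2

    t = crossing(lambda t: up(t) - down(t))
    print(round(t, 4), round(up(t), 4))  # same time of day, same place on both trips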

Similar strategies can often improve problem solving on tests.

Other barriers for individuals

Individual humans engaged in problem solving tend to overlook subtractive changes, including those that are critical elements of efficient solutions. The tendency to solve problems by creating or adding elements first, only, or mostly, rather than by subtracting elements or processes, has been shown to intensify with higher cognitive loads such as information overload.

Dreaming: problem-solving without waking consciousness

Problem solving can also occur without waking consciousness. There are many reports of scientists and engineers who solved problems in their dreams. Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream.

The chemist August Kekulé was considering how benzene arranged its six carbon and six hydrogen atoms. Thinking about the problem, he dozed off and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. As Kekulé wrote in his diary,

One of the snakes seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke; and this time also I spent the rest of the night in working out the consequences of the hypothesis.

There also are empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcher William C. Dement told his undergraduate class of 500 students that he wanted them to think about an infinite series, whose first elements were OTTFF, to see if they could deduce the principle behind it and to say what the next elements of the series would be. He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning.

The sequence OTTFF is the first letters of the numbers: one, two, three, four, five. The next five elements of the series are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported the following dream:

I was standing in an art gallery, looking at the paintings on the wall. As I walked down the hall, I began to count the paintings: one, two, three, four, five. As I came to the sixth and seventh, the paintings had been ripped from their frames. I stared at the empty frames with a peculiar feeling that some mystery was about to be solved. Suddenly I realized that the sixth and seventh spaces were the solution to the problem!

With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution.

Mark Blechner conducted this experiment and obtained results similar to Dement's. He found that while trying to solve the problem, people had dreams in which the solution appeared to be obvious from the dream, but it was rare for the dreamers to realize how their dreams had solved the puzzle. Coaxing or hints did not get them to realize it, although once they heard the solution, they recognized how their dream had solved it. For example, one person in that OTTFF experiment dreamed:

There is a big clock. You can see the movement. The big hand of the clock was on the number six. You could see it move up, number by number, six, seven, eight, nine, ten, eleven, twelve. The dream focused on the small parts of the machinery. You could see the gears inside.

In the dream, the person counted out the next elements of the series – six, seven, eight, nine, ten, eleven, twelve – yet he did not realize that this was the solution of the problem. His sleeping mindbrain solved the problem, but his waking mindbrain was not aware how.

Albert Einstein believed that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mindbrain has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution." Einstein said that he did his problem-solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined."

Cognitive sciences: two schools

In cognitive sciences, researchers realized that problem-solving processes differ across knowledge domains and across levels of expertise, and that, consequently, findings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside it. This realization has led to an emphasis on real-world problem solving since the 1990s. That emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios.

Europe

In Europe, two main approaches have surfaced, one initiated by Donald Broadbent in the United Kingdom and the other one by Dietrich Dörner in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology, however. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables.

North America

In North America, initiated by the work of Herbert A. Simon on "learning by doing" in semantically rich domains, researchers began to investigate problem solving separately in different natural knowledge domains – such as physics, writing, or chess playing – thus relinquishing their attempts to extract a global theory of problem solving. Instead, these researchers have frequently focused on the development of problem solving within a certain domain, that is on the development of expertise.

Areas that have attracted rather intensive attention in North America include:

  • Reading
  • Writing
  • Calculation
  • Political decision making
  • Managerial problem solving
  • Lawyers' reasoning
  • Mechanical problem solving
  • Problem solving in electronics
  • Computer skills
  • Game playing
  • Personal problem solving
  • Mathematical problem solving
  • Social problem solving
  • Problem solving for innovations and inventions: TRIZ

Characteristics of complex problems

Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a single, simple obstacle in the way; CPS comprises multiple interrelated obstacles at a time. In a real-life example, a surgeon at work faces far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, as follows:

  • Complexity (large numbers of items, interrelations and decisions)
    • enumerability
    • heterogeneity
    • connectivity (hierarchy relation, communication relation, allocation relation)
  • Dynamics (time considerations)
  • Intransparency (lack of clarity of the situation)
    • commencement opacity
    • continuation opacity
  • Polytely (multiple goals)
    • inexpressiveness
    • opposition
    • transience

Collective problem solving

Problem solving is applied on many different levels, from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively.

Social issues and global issues can typically only be solved collectively.

It has been noted that the complexity of contemporary problems has exceeded the cognitive capacity of any individual and requires different but complementary expertise and collective problem solving ability.

Collective intelligence is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals.

Collaborative problem solving is about people working together face-to-face or in online workspaces with a focus on solving real-world problems. These groups are made up of members who share a common concern, a similar passion, and/or a commitment to their work. Members are willing to ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods.

These groups can be assigned by instructors, or may be student-regulated based on individual student needs. The groups, or group members, may be fluid based on need, or may occur only temporarily to finish an assigned task. They may also be more permanent in nature, depending on the needs of the learners. All members of the group must have some input into the decision-making process and have a role in the learning process. Group members are responsible for the thinking, teaching, and monitoring of all members in the group. Group work must be coordinated among its members so that each member makes an equal contribution to the whole work. Group members must identify and build on their individual strengths so that everyone can make a significant contribution to the task.

Collaborative groups require joint intellectual efforts between the members and involve social interactions to solve problems together. The knowledge shared during these interactions is acquired during communication, negotiation, and production of materials. Members actively seek information from others by asking questions. The capacity to use questions to acquire new information increases understanding and the ability to solve problems. Collaborative group work has the ability to promote critical thinking skills, problem-solving skills, social skills, and self-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge that often leads to better learning outcomes than individual work.

In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".

Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture. He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating that "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and that teamwork through collective intelligence communities contributes to the development of such skills.

Collective impact is the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration.

After World War II, the UN, the Bretton Woods organizations, and the WTO were created; collective problem solving on the international level crystallized around these three types of organizations from the 1980s onward. As these global institutions remain state-like or state-centric, it has been called unsurprising that they continue state-like or state-centric approaches to collective problem solving rather than alternative ones.

Crowdsourcing is a process of accumulating ideas, thoughts, or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow massive numbers of subjects to be involved, as well as systems for managing these suggestions so that they provide good results. The Internet created a new capacity for collective problem solving, including at a planetary scale.

Human-based computation

From Wikipedia, the free encyclopedia

Human-based computation (HBC), human-assisted computation, ubiquitous human computing or distributed thinking (by analogy to distributed computing) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human–computer interaction. For computationally difficult tasks such as image recognition, human-based computation plays a central role in training Deep Learning-based Artificial Intelligence systems. In this case, human-based computation has been referred to as human-aided artificial intelligence.

In traditional computation, a human employs a computer to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions. This turns hybrid networks of humans and computers into "large scale distributed computing networks" where code is partially executed in human brains and on silicon-based processors.

Early work

Human-based computation (apart from the historical meaning of "computer") research has its origins in early work on interactive evolutionary computation (EC). The idea behind interactive evolutionary algorithms is due to Richard Dawkins. In the Biomorphs software accompanying his book The Blind Watchmaker (Dawkins, 1986), the preference of a human experimenter is used to guide the evolution of two-dimensional sets of line segments. In essence, this program asks a human to be the fitness function of an evolutionary algorithm, so that the algorithm can use human visual perception and aesthetic judgment to do something that a normal evolutionary algorithm cannot do. However, it is difficult to get enough evaluations from a single human if we want to evolve more complex shapes. Victor Johnston and Karl Sims[12] extended this concept by harnessing the power of many people for fitness evaluation (Caldwell and Johnston, 1991; Sims, 1991). As a result, their programs could evolve beautiful faces and pieces of art appealing to the public. These programs effectively reversed the common interaction between computers and humans: the computer is no longer an agent of its user but a coordinator aggregating the efforts of many human evaluators. These and other similar research efforts became the topic of research in aesthetic selection or interactive evolutionary computation (Takagi, 2001); however, the scope of this research was limited to outsourcing evaluation, and as a result it did not explore the full potential of outsourcing.
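
A minimal sketch of this human-as-fitness-function arrangement, to be run at a terminal (the string "genome" and mutation operator are invented for illustration): the computer innovates by mutating candidates, and a person's choice plays the role of the fitness function.

    import random

    # Interactive evolution of a short string "artwork": the human is the
    # fitness function, picking which variant survives each generation.
    ALPHABET = " .:-=+*#%@"

    def mutate(genome):
        i = random.randrange(len(genome))
        return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

    parent = "".join(random.choice(ALPHABET) for _ in range(20))
    for generation in range(5):
        variants = [mutate(parent) for _ in range(3)]
        for n, v in enumerate(variants):
            print(f"{n}: {v}")
        choice = input("Which variant do you prefer (0-2)? ")
        parent = variants[int(choice)]   # human selection; computer innovation
    print("final:", parent)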

A concept of the automatic Turing test pioneered by Moni Naor (1996) is another precursor of human-based computation. In Naor's test, the machine can control the access of humans and computers to a service by challenging them with a natural language processing (NLP) or computer vision (CV) problem in order to identify the humans among them. The set of problems is chosen so that they currently have no algorithmic solution that is both effective and efficient: if such an algorithm existed, it could easily be performed by a computer, thus defeating the test. In fact, Moni Naor was modest in calling this an automated Turing test: the imitation game described by Alan Turing (1950) proposed only a specific NLP task and did not propose using CV problems, whereas the Naor test identifies and explores a large class of problems, not necessarily from the domain of NLP, that could be used for the same purpose in both automated and non-automated versions of the test.

Finally, the human-based genetic algorithm (HBGA) encourages human participation in multiple different roles. Humans are not limited to the role of evaluator or some other predefined role; they can choose to perform a more diverse set of tasks. In particular, they can contribute their innovative solutions into the evolutionary process, make incremental changes to existing solutions, and perform intelligent recombination. In short, HBGA allows humans to participate in all operations of a typical genetic algorithm. As a result, HBGA can process solutions for which no computational innovation operators are available, for example, natural languages. Thus, HBGA obviated the need for a fixed representational scheme, which was a limiting factor of both standard and interactive EC. These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg.

Classes of human-based computation

Human-based computation methods combine computers and humans in different roles. Kosorukoff (2000) proposed a way to describe the division of labor in computation that groups human-based methods into three classes. The following table uses the evolutionary computation model to describe four classes of computation, three of which rely on humans in some role. For each class, a representative example is shown. The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes. The table is a slice of a three-dimensional table: the third dimension defines whether the organizational function is performed by humans or a computer; here it is assumed to be performed by a computer.

Division of labor in computation

Selection agent \ Innovation agent    Computer                         Human
Computer                              Genetic algorithm                Computerized tests
Human                                 Interactive genetic algorithm    Human-based genetic algorithm

Classes of human-based computation from this table can be referred to by two-letter abbreviations: HC, CH, HH. Here the first letter identifies the type of agent performing innovation, and the second letter specifies the type of selection agent. In some implementations (wiki is the most common example), human-based selection functionality might be limited; this can be shown with a lowercase h.

Methods of human-based computation

  • (HC) Darwin (Vyssotsky, Morris, McIlroy, 1961) and Core War (Jones, Dewdney, 1984): games in which several programs written by people compete in a tournament (computational simulation) in which the fittest programs survive. Authors of the programs copy, modify, and recombine successful strategies to improve their chances of winning.
  • (CH) Interactive EC (Dawkins, 1986; Caldwell and Johnston, 1991; Sims, 1991): IEC enables the user to create an abstract drawing only by selecting his/her favorite images, so the human performs only the fitness computation and the software performs the innovative role. Simulated breeding style (Unemi, 1998) introduces no explicit fitness, just selection, which is easier for humans.
  • (HH2) Wiki (Cunningham, 1995): enabled editing of web content by multiple users, i.e., supported two types of human-based innovation (contributing a new page and incremental edits). However, the selection mechanism was absent until 2002, when the wiki was augmented with a revision history allowing unhelpful changes to be reversed. This provided a means of selection among several versions of the same page and turned the wiki into a tool supporting collaborative content evolution (in EC terms, it would be classified as a human-based evolution strategy).
  • (HH3) Human-based genetic algorithm (Kosorukoff, 1998): uses both human-based selection and three types of human-based innovation (contributing new content, mutation, and recombination). Thus, all operators of a typical genetic algorithm are outsourced to humans (hence the origin of "human-based"). This idea was extended in 2011 to integrating crowds with a genetic algorithm to study creativity.
  • (HH1) Social search applications accept contributions from users and attempt to use human evaluation to select the fittest contributions, which get to the top of the list. These use one type of human-based innovation. Early work was done in the context of HBGA. Digg and Reddit are popular examples. See also Collaborative filtering.
  • (HC) Computerized tests. A computer generates a problem and presents it to evaluate a user. For example, CAPTCHA tells human users from computer programs by presenting a problem that is supposedly easy for a human and difficult for a computer. While CAPTCHAs are effective security measures for preventing automated abuse of online services, the human effort spent solving them is otherwise wasted. The reCAPTCHA system makes use of these human cycles to help digitize books by presenting words from scanned old books that optical character recognition cannot decipher.
  • (HC) Interactive online games: These are programs that extract knowledge from people in an entertaining way.
  • (HC) "Human Swarming" or "Social Swarming". The UNU platform for human swarming establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.
  • (NHC) Natural human computation involves leveraging existing human behavior to extract computationally significant work without disturbing that behavior. NHC is distinguished from other forms of human-based computation in that, rather than asking humans to perform novel computational tasks, it takes advantage of previously unnoticed computational significance in existing behavior.

Incentives to participation

In different human-based computation projects people are motivated by one or more of the following.

  • Receiving a fair share of the result
  • Direct monetary compensation (e.g. in Amazon Mechanical Turk, ChaCha Search guide, Mahalo.com Answers members)
  • Opportunity to participate in the global information economy
  • Desire to diversify their activity (e.g. "people aren't asked in their daily lives to be creative")
  • Esthetic satisfaction
  • Curiosity, desire to test if it works
  • Volunteerism, desire to support a cause of the project
  • Reciprocity, exchange, mutual help
  • Desire to be entertained with the competitive or cooperative spirit of a game
  • Desire to communicate and share knowledge
  • Desire to share a user innovation to see if someone else can improve on it
  • Desire to game the system and influence the final result
  • Fun
  • Increasing online reputation/recognition

Many projects have explored various combinations of these incentives. More information about the motivation of participants in these projects can be found in Kosorukoff[36] and Von Hippel.[37]

Human-based computation as a form of social organization

Viewed as a form of social organization, human-based computation often turns out, perhaps surprisingly, to be more robust and productive than traditional organizations. The latter depend on obligations to maintain their more or less fixed structure and to remain functional and stable. Each of them is similar to a carefully designed mechanism with humans as its parts. However, this limits the freedom of their human employees and subjects them to various kinds of stresses. Most people, unlike mechanical parts, find it difficult to adapt to some fixed role that best fits the organization. Evolutionary human-computation projects offer a natural solution to this problem. They adapt organizational structure to human spontaneity, accommodate human mistakes and creativity, and utilize both in a constructive way. This leaves their participants free from obligations without endangering the functionality of the whole, making people happier. There are still some challenging research problems that need to be solved before we can realize the full potential of this idea.

The algorithmic outsourcing techniques used in human-based computation are much more scalable than the manual or automated techniques used to manage outsourcing traditionally. It is this scalability that allows the effort to be easily distributed among thousands of participants. It was suggested recently that this mass outsourcing is sufficiently different from traditional small-scale outsourcing to merit a new name: crowdsourcing. However, others have argued that crowdsourcing ought to be distinguished from true human-based computation. Crowdsourcing does indeed involve the distribution of computation tasks across a number of human agents, but Michelucci argues that this is not sufficient for it to be considered human computation. Human computation requires not just that a task be distributed across different agents, but also that the set of agents across which the task is distributed be mixed: some of them must be humans, but others must be traditional computers. It is this mixture of different types of agents in a computational system that gives human-based computation its distinctive character. Some instances of crowdsourcing do indeed meet this criterion, but not all of them do.

Human computation organizes workers through a task market with APIs, task prices, and software-as-a-service protocols that allow employers/requesters to receive data produced by workers directly into their IT systems. As a result, many employers attempt to manage workers automatically through algorithms rather than responding to workers on a case-by-case basis or addressing their concerns. Responding to workers is difficult to scale to the employment levels enabled by human computation microwork platforms. Workers in the Mechanical Turk system, for example, have reported that human computation employers can be unresponsive to their concerns and needs.

Applications

Human assistance can be helpful in solving any AI-complete problem, which by definition is a task that is infeasible for computers to do but feasible for humans.

Criticism

Human-based computation has been criticized as exploitative and deceptive with the potential to undermine collective action.

In social philosophy it has been argued that human-based computation is an implicit form of online labour. The philosopher Rainer Mühlhoff distinguishes five different types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g. CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g. tagging faces on Facebook), (4) information mining and (5) click-work (such as on Amazon Mechanical Turk). Mühlhoff argues that human-based computation often feeds into Deep Learning-based Artificial Intelligence systems, a phenomenon he analyzes as "human-aided artificial intelligence".

Microbial intelligence

From Wikipedia, the free encyclopedia

Microbial intelligence (also known as bacterial intelligence) is the intelligence shown by microorganisms. The concept encompasses complex adaptive behavior shown by single cells, and altruistic or cooperative behavior in populations of like or unlike cells, mediated by chemical signalling that induces physiological or behavioral changes in cells and influences colony structures.

Complex cells, like protozoa or algae, show remarkable abilities to organize themselves in changing circumstances. Shell-building by amoebae reveals complex discrimination and manipulative skills that are ordinarily thought to occur only in multicellular organisms.

Even bacteria, which show primitive behavior as isolated cells, can display more sophisticated behavior as a population. These behaviors occur in single-species populations or in mixed-species populations. Examples are colonies or swarms of myxobacteria, quorum sensing, and biofilms.

It has been suggested that a bacterial colony loosely mimics a biological neural network. The bacteria can take inputs in the form of chemical signals, process them, and then produce output chemicals to signal other bacteria in the colony.

Bacterial communication and self-organization in the context of network theory have been investigated by Eshel Ben-Jacob's research group at Tel Aviv University, which developed a fractal model of bacterial colonies and identified linguistic and social patterns in the colony lifecycle.

Examples of microbial intelligence

Bacterial

  • Bacterial biofilms can emerge through the collective behavior of thousands or millions of cells
  • Biofilms formed by Bacillus subtilis can use electric signals (ion transmission) to synchronize growth so that the innermost cells of the biofilm do not starve.
  • Under nutritional stress bacterial colonies can organize themselves in such a way so as to maximize nutrient availability.
  • Bacteria reorganize themselves under antibiotic stress.
  • Bacteria can swap genes (such as genes coding antibiotic resistance) between members of mixed species colonies.
  • Individual cells of myxobacteria coordinate to produce complex structures or move as social entities. Myxobacteria move and feed cooperatively in predatory groups, known as swarms or wolf packs, in which multiple forms of signalling and several polysaccharides play an important role.
  • Populations of bacteria use quorum sensing to judge their own densities and change their behaviors accordingly. This occurs in the formation of biofilms, infectious disease processes, and the light organs of bobtail squid.
  • For any bacterium to enter a host cell, the cell must display receptors to which the bacterium can adhere in order to enter it. Some strains of E. coli are able to internalize themselves into a host cell even without the presence of specific receptors, because they bring their own receptor, to which they then attach and enter the cell.
  • Under nutrient limitation, some bacteria transform into endospores to resist heat and dehydration.
  • A huge array of microorganisms can evade recognition by the immune system by changing their surface antigens, so that any defense mechanisms directed against previously present antigens are useless against the newly expressed ones.
  • In April 2020 it was reported that collectives of bacteria have a membrane-potential-based form of working memory. When scientists shone light onto a biofilm of bacteria, optical imprints lasted for hours after the initial stimulus, as the light-exposed cells responded differently to oscillations in membrane potential owing to changes in their potassium channels.

Protists

  • Individual cells of cellular slime moulds coordinate to produce complex structures or move as multicellular entities. Biologist John Bonner pointed out that although slime molds are “no more than a bag of amoebae encased in a thin slime sheath, they manage to have various behaviors that are equal to those of animals who possess muscles and nerves with ganglia -- that is, simple brains.”
  • The single-celled ciliate Stentor roeselii expresses a sort of "behavioral hierarchy" and can 'change its mind' if its response to an irritant does not relieve the irritant, which has been taken, very speculatively, to imply a sense of 'cognition'.
  • Paramecium, specifically P. caudatum, is capable of learning to associate intense light with stimuli such as electric shocks in its swimming medium, although it appears to be unable to associate darkness with electric shocks.
  • The protozoan ciliate Tetrahymena has the capacity to 'memorize' the geometry of its swimming area. Cells that were separated and confined in a droplet of water recapitulated circular swimming trajectories upon release. This may result mainly from a rise in intracellular calcium.

Applications

Bacterial colony optimization

Bacterial colony optimization is an algorithm used in evolutionary computing. The algorithm is based on a lifecycle model that simulates some typical behaviors of E. coli bacteria during their whole lifecycle, including chemotaxis, communication, elimination, reproduction, and migration.
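
Published variants of the algorithm differ in their details; the toy sketch below illustrates only a chemotaxis-style tumble-and-swim step on an invented nutrient landscape, not the full lifecycle model.

    import random

    # Chemotaxis-style search: tumble to a random direction, then keep
    # swimming in that direction while the "nutrient level" improves.
    def nutrient(x, y):
        return -(x - 3) ** 2 - (y + 1) ** 2   # invented landscape; peak at (3, -1)

    def chemotaxis(steps=2000, swim=0.1):
        x = y = 0.0
        for _ in range(steps):
            dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)      # tumble
            while nutrient(x + swim * dx, y + swim * dy) > nutrient(x, y):
                x, y = x + swim * dx, y + swim * dy                    # swim
        return x, y

    print(chemotaxis())  # converges near the nutrient peak (3, -1)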

Slime mold computing

Logical circuits can be built with slime moulds. Distributed-systems experiments have used them to approximate motorway graphs. The slime mould Physarum polycephalum has been reported to find approximate solutions to the Traveling Salesman Problem, a combinatorial problem of exponentially increasing complexity, in linear time.

Soil ecology

Microbial community intelligence is found in soil ecosystems in the form of interacting adaptive behaviors and metabolisms. According to Ferreira et al., "Soil microbiota has its own unique capacity to recover from change and to adapt to the present state[...] [This] capacity to recover from change and to adapt to the present state by altruistic, cooperative and co-occurring behavior is considered a key attribute of microbial community intelligence."

Many bacteria that exhibit complex behaviors or coordination are heavily present in soil in the form of biofilms. Micropredators that inhabit soil, including social predatory bacteria, have significant implications for its ecology. Soil biodiversity, managed in part by these micropredators, is of significant importance for carbon cycling and ecosystem functioning.

The complicated interaction of microbes in the soil has been proposed as a potential carbon sink. Bioaugmentation has been suggested as a method to increase the 'intelligence' of microbial communities, that is, adding the genomes of autotrophic, carbon-fixing or nitrogen-fixing bacteria to their metagenome.

Peripheral tolerance

From Wikipedia, the free encyclopedia

In immunology, peripheral tolerance is the second branch of immunological tolerance, after central tolerance. It takes place in the immune periphery (after T and B cells egress from primary lymphoid organs). Its main purpose is to ensure that self-reactive T and B cells which escaped central tolerance do not cause autoimmune disease. Peripheral tolerance also prevents immune responses to harmless food antigens and allergens.

Deletion of self-reactive T cells in the thymus is only 60-70% efficient, and the naive T cell repertoire contains a significant portion of low-avidity self-reactive T cells. These cells can trigger an autoimmune response, and there are several mechanisms of peripheral tolerance to prevent their activation. Antigen-specific mechanisms of peripheral tolerance include maintenance of T cells in quiescence, ignorance of antigen, and direct inactivation of effector T cells by clonal deletion, conversion to regulatory T cells (Tregs), or induction of anergy. Tregs, which are also generated during thymic T cell development, further suppress the effector functions of conventional lymphocytes in the periphery. Dendritic cells (DCs) participate in the negative selection of autoreactive T cells in the thymus, but they also mediate peripheral immune tolerance through several mechanisms.

Dependence of a particular antigen on either central or peripheral tolerance is determined by its abundance in the organism. B cell peripheral tolerance is much less studied and is largely mediated by B cell dependence on T cell help.

Cells mediating peripheral tolerance

Regulatory T cells

Tregs are the central mediators of immune suppression, and they play a key role in maintaining peripheral tolerance. The master regulator of Treg phenotype and function is Foxp3. Natural Tregs (nTregs) are generated in the thymus during negative selection; their TCRs show high affinity for self-peptides. Induced Tregs (iTregs) develop from conventional naive helper T cells after antigen recognition in the presence of TGF-β and IL-2. iTregs are enriched in the gut, where they establish tolerance to commensal microbiota and harmless food antigens. Regardless of their origin, Tregs use several different mechanisms to suppress autoimmune reactions. These include depletion of IL-2 from the environment, secretion of the anti-inflammatory cytokines IL-10, TGF-β, and IL-35, and induction of apoptosis in effector cells. CTLA-4 is a surface molecule on Tregs that can prevent CD28-mediated costimulation of T cells after TCR antigen recognition.

Tolerogenic DCs

DCs are a major cell population responsible for initiating the adaptive immune response. They present short peptides on MHCII, which are recognized by specific TCRs. After encountering an antigen together with danger- or pathogen-associated molecular patterns, DCs begin secreting proinflammatory cytokines, express the costimulatory molecules CD80 and CD86, and migrate to the lymph nodes to activate naive T cells. However, immature DCs (iDCs) are able to induce both CD4 and CD8 T cell tolerance. The immunogenic potential of iDCs is weak because of their low expression of costimulatory molecules and modest levels of MHCII. iDCs perform endocytosis and phagocytosis of foreign antigens and apoptotic cells, which occurs physiologically in peripheral tissues. Antigen-loaded iDCs migrate to the lymph nodes, secrete IL-10 and TGF-β, and present antigen to naive T cells without costimulation. If a T cell recognizes the antigen, it becomes anergic, is deleted, or is converted to a Treg. iDCs are more potent Treg inducers than lymph-node-resident DCs. BTLA is a crucial molecule for DC-mediated Treg conversion. Tolerogenic DCs express FasL and TRAIL to directly induce apoptosis of responding T cells. They also produce indoleamine 2,3-dioxygenase (IDO) to prevent T cell proliferation, and secrete retinoic acid to support iTreg differentiation. Nonetheless, upon maturation (for example, during infection), DCs largely lose their tolerogenic capabilities.

LNSCs

Aside from dendritic cells, additional cell populations have been identified that are able to induce antigen-specific T cell tolerance. These are mainly members of the lymph node stromal cells (LNSCs). LNSCs are generally divided into several subpopulations based on the expression of the gp38 (PDPN) and CD31 surface markers. Among these, only fibroblastic reticular cells (FRCs) and lymphatic endothelial cells (LECs) have been shown to play a role in peripheral tolerance. Both populations can induce CD8 T cell tolerance by presenting endogenous antigens on MHCI molecules. LNSCs lack expression of the autoimmune regulator (AIRE), and their production of autoantigens depends instead on the transcription factor Deaf1. LECs express PD-L1, which engages PD-1 on CD8 T cells to restrict self-reactivity. LNSCs can drive CD4 T cell tolerance by presenting peptide–MHCII complexes acquired from DCs. Conversely, LECs can serve as a self-antigen reservoir and transport self-antigens to DCs for self-peptide–MHCII presentation to CD4 T cells. In mesenteric lymph nodes (mLNs), LNSCs can induce Tregs directly by secreting TGF-β, or indirectly by imprinting mLN-resident DCs.

Intrinsic mechanisms of T cell peripheral tolerance

Although the majority of self-reactive T cell clones are deleted in the thymus by the mechanisms of central tolerance, low-affinity self-reactive T cells continuously escape to the immune periphery. Additional mechanisms therefore exist to prevent unrestrained responses by self-reactive T cells.

Quiescence

When naive T cells exit the thymus, they are in a quiescent state: they remain in the G0 stage of the cell cycle and have low metabolic, transcriptional, and translational activity. Quiescence can prevent naive T cell activation by tonic signaling. After antigen exposure and costimulation, naive T cells begin a process called quiescence exit, which results in proliferation and effector differentiation.

Ignorance

Self-reactive T cells can fail to initiate an immune response after recognizing a self-antigen. The intrinsic mechanism of ignorance operates when the affinity of the TCR for the antigen is too low to elicit T cell activation. There is also an extrinsic mechanism: antigens present at generally low levels cannot stimulate T cells sufficiently. Specialized mechanisms ensuring ignorance by the immune system have developed in so-called immune-privileged organs. Antigen abundance and anatomical location are the most important factors in T cell ignorance. In an inflammatory context, however, T cells can override ignorance and induce autoimmune disease.

Anergy

Anergy is a state of functional unresponsiveness induced upon self-antigen recognition. T cells can be made non-responsive to presented antigens if the T cell engages an MHC molecule on an antigen-presenting cell (signal 1) without engagement of costimulatory molecules (signal 2). Costimulatory molecules are upregulated by cytokines (signal 3) in the context of acute inflammation. Without proinflammatory cytokines, costimulatory molecules will not be expressed on the surface of the antigen-presenting cell, so an MHC–TCR interaction between the T cell and the APC then results in anergy. TCR stimulation leads to translocation of NFAT into the nucleus, but in the absence of costimulation there is no MAPK signaling, and translocation of the transcription factor AP-1 into the nucleus is impaired. This imbalance of transcription factors results in the expression of several genes involved in establishing the anergic state. Anergic T cells show long-lasting epigenetic programming that silences effector cytokine production. Anergy is reversible, and T cells can recover their functional responsiveness in the absence of the antigen.
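Read as plain decision logic, the two-signal requirement can be captured in a few lines of code. The toy Python function below is only an illustrative sketch of the rule described above, not a model of real T cell biochemistry; the function name and state labels are invented for the example.

def t_cell_fate(tcr_engaged, costimulation):
    # Signal 1 absent: the T cell simply does not respond to the antigen.
    if not tcr_engaged:
        return "ignorant"
    # Signal 1 plus signal 2: full activation.
    if costimulation:
        return "activated"
    # Signal 1 without signal 2: functional unresponsiveness.
    return "anergic"

# Inflammation upregulates costimulatory molecules (signal 3 enables signal 2),
# so the same MHC-TCR interaction activates rather than tolerizes.
print(t_cell_fate(tcr_engaged=True, costimulation=False))  # anergic
print(t_cell_fate(tcr_engaged=True, costimulation=True))   # activated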

Peripheral deletion

After a T cell responds to a costimulation-deficient antigen, a minor population of T cells develops anergy, and a large proportion of T cells is rapidly lost by apoptosis. This cell death can be mediated by the intrinsic pro-apoptotic BCL-2 family member BIM; the balance between pro-apoptotic BIM and the anti-apoptotic mediator BCL-2 determines the eventual fate of the tolerized T cell. There are also extrinsic mechanisms of deletion, mediated by the cytotoxic activity of Fas/FasL or TRAIL/TRAILR interactions.

Immunoprivileged organs

Potentially self-reactive T cells are not activated at immunoprivileged sites, where antigens are expressed in areas not surveyed by the immune system. This can occur in the testes, for instance. Anatomical barriers can also separate lymphocytes from antigen; an example is the central nervous system, behind the blood–brain barrier. Naive T cells are not present in high numbers in peripheral tissue but stay mainly in the circulation and lymphoid tissue.

Some antigens are present at too low a concentration to cause an immune response; such subthreshold stimulation instead leads to apoptosis of the T cell.

Immunoprivileged sites include the anterior chamber of the eye, the testes, the placenta and the fetus, and the central nervous system. These areas are protected by several mechanisms: Fas ligand expression, which binds Fas on lymphocytes and induces apoptosis; anti-inflammatory cytokines, including TGF-beta and interleukin 10; and blood–tissue barriers with tight junctions between endothelial cells.

In the placenta, IDO breaks down tryptophan, creating a "tryptophan desert" microenvironment that inhibits lymphocyte proliferation.

Split tolerance

Since many pathways of immunity are interdependent, not all of them need to be tolerised. For example, tolerised T cells will not activate autoreactive B cells: without help from CD4 T cells, the B cells are not activated.
