
Sunday, February 18, 2024

Analogy

From Wikipedia, the free encyclopedia

Analogy is a comparison or correspondence between two things (or two groups of things) because of a third element that they are considered to share.

In logic, it is an inference or an argument from one particular to another particular, as opposed to deduction, induction, and abduction. The term is also used where at least one of the premises, or the conclusion, is general rather than particular in nature. It has the general form A is to B as C is to D.

In a broader sense, analogical reasoning is a cognitive process of transferring some information or meaning of a particular subject (the analog, or source) onto another (the target); and also the linguistic expression corresponding to such a process. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) a similarity, as in the biological notion of analogy.

Ernest Rutherford's model of the atom (modified by Niels Bohr) made an analogy between the atom and the Solar System.

Analogy plays a significant role in human thought processes. It has been argued that analogy lies at "the core of cognition".

Etymology

The English word analogy derives from the Latin analogia, itself derived from the Greek ἀναλογία, "proportion", from ana- "upon, according to" [also "again", "anew"] + logos "ratio" [also "word, speech, reckoning"].

Models and theories

Analogy plays a significant role in problem solving, as well as decision making, argumentation, perception, generalization, memory, creativity, invention, prediction, emotion, explanation, conceptualization and communication. It lies behind basic tasks such as the identification of places, objects and people, for example, in face perception and facial recognition systems. Hofstadter has argued that analogy is "the core of cognition".

An analogy is not a figure of speech but a kind of thought. Specific analogical language uses exemplification, comparisons, metaphors, similes, allegories, and parables, but not metonymy. Phrases like and so on, and the like, as if, and the very word like also rely on an analogical understanding by the receiver of a message including them. Analogy is important not only in ordinary language and common sense (where proverbs and idioms give many examples of its application) but also in science, philosophy, law and the humanities.

The concepts of association, comparison, correspondence, mathematical and morphological homology, homomorphism, iconicity, isomorphism, metaphor, resemblance, and similarity are closely related to analogy. In cognitive linguistics, the notion of conceptual metaphor may be equivalent to that of analogy. Analogy is also a basis for comparative arguments, as well as for experiments whose results are transferred to objects that have not been under examination (e.g., experiments on rats whose results are applied to humans).

Analogy has been studied and discussed since classical antiquity by philosophers, scientists, theologists and lawyers. The last few decades have shown a renewed interest in analogy, most notably in cognitive science.

Development

  • Aristotle identified analogy in works such as Metaphysics and Nicomachean Ethics.
  • Roman lawyers used analogical reasoning and the Greek word analogia.
  • In Islamic logic, analogical reasoning was used for the process of qiyas in Islamic sharia law and fiqh jurisprudence.
  • Medieval lawyers distinguished analogia legis and analogia iuris (see below).
  • The Middle Ages saw an increased use and theorization of analogy.
  • In Christian scholastic theology, analogical arguments were accepted in order to explain the attributes of God.
    • Aquinas made a distinction between equivocal, univocal and analogical terms, the last being those like healthy that have different but related meanings. Not only can a person be "healthy", but so can the food that is good for health (see the contemporary distinction between polysemy and homonymy).
    • Thomas Cajetan wrote an influential treatise on analogy. In all of these cases, the wide Platonic and Aristotelian notion of analogy was preserved.

Cajetan named several kinds of analogy that had been used but not previously named, in particular:

  • Analogy of attribution (analogia attributionis) or improper proportionality, e.g., "This food is healthy."
  • Analogy of proportionality (analogia proportionalitatis) or proper proportionality, e.g., "2 is to 1 as 4 is to 2", or "the goodness of humans is relative to their essence as the goodness of God is relative to God's essence."
  • Metaphor, e.g., steely determination.

Identity of relation

In ancient Greek the word αναλογια (analogia) originally meant proportionality, in the mathematical sense, and it was indeed sometimes translated to Latin as proportio. Analogy was understood as identity of relation between any two ordered pairs, whether of mathematical nature or not.

Analogy and abstraction are different cognitive processes, and analogy is often the easier one. The analogy discussed below (hand is to palm as foot is to sole) does not compare all the properties of a hand and a foot, but rather compares the relationship between a hand and its palm to that between a foot and its sole. While a hand and a foot have many dissimilarities, the analogy focuses on their similarity in having an inner surface.

The same notion of analogy was used in the US-based SAT college admission tests, which included "analogy questions" of the form "A is to B as C is to what?" For example, "Hand is to palm as foot is to ____?" These questions were usually given in the Aristotelian format: HAND : PALM :: FOOT : ____. While most competent English speakers will immediately give the right answer to the analogy question (sole), it is more difficult to identify and describe the exact relation that holds both between hand and palm and between foot and sole. This relation is not apparent in some lexical definitions of palm and sole, where the former is defined as the inner surface of the hand and the latter as the underside of the foot.

Kant's Critique of Judgment held to this notion of analogy, arguing that there can be exactly the same relation between two completely different objects.

Shared abstraction

In several cultures, the Sun is the source of an analogy to God.

Greek philosophers such as Plato and Aristotle used a wider notion of analogy. They saw analogy as a shared abstraction: analogous objects did not necessarily share a relation; they could also share an idea, a pattern, a regularity, an attribute, an effect or a philosophy. These authors also accepted that comparisons, metaphors and "images" (allegories) could be used as arguments, and sometimes they called them analogies. Analogies should also make those abstractions easier to understand and give confidence to those who use them.

James Francis Ross in Portraying Analogy (1982), the first substantive examination of the topic since Cajetan's De Nominum Analogia, demonstrated that analogy is a systematic and universal feature of natural languages, with identifiable and law-like characteristics which explain how the meanings of words in a sentence are interdependent.

Special case of induction

In contrast, Ibn Taymiyya, Francis Bacon and later John Stuart Mill argued that analogy is simply a special case of induction. In their view, analogy is an inductive inference from common known attributes to another probable common attribute, which is known only in the source of the analogy, in the following form:

Premises
a is C, D, E, F, G
b is C, D, E, F
Conclusion
b is probably G.
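Read this way, the schema is an attribute-overlap inference. As a loose illustration only (not part of the original argument), the toy Python sketch below treats the premises as sets of known attributes and projects the source's extra attribute onto the target; the names source_attrs and target_attrs are made up for the example.

# Toy illustration of analogy read as a special case of induction:
# the source (a) and target (b) share C, D, E, F; the source is also known
# to be G, so G is proposed, tentatively, as a probable attribute of b.

source_attrs = {"C", "D", "E", "F", "G"}   # a is C, D, E, F, G
target_attrs = {"C", "D", "E", "F"}        # b is C, D, E, F

shared = source_attrs & target_attrs
extra = source_attrs - target_attrs

# A crude "strength" heuristic: the more of the source's attributes the
# target is known to share, the more confidence we place in the projection.
strength = len(shared) / len(source_attrs)

for attribute in sorted(extra):
    print(f"b is probably {attribute} (overlap strength {strength:.2f})")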

Shared structure

According to Shelley (2003), the study of the coelacanth drew heavily on analogies from other fish.

Contemporary cognitive scientists use a wide notion of analogy, extensionally close to that of Plato and Aristotle, but framed by Gentner's (1983) structure mapping theory. The same idea of mapping between source and target is used by conceptual metaphor and conceptual blending theorists. Structure mapping theory concerns both psychology and computer science. According to this view, analogy depends on the mapping or alignment of the elements of source and target. The mapping takes place not only between objects, but also between relations of objects and between relations of relations. The whole mapping yields the assignment of a predicate or a relation to the target. Structure mapping theory has been applied and has found considerable confirmation in psychology. It has had reasonable success in computer science and artificial intelligence (see below). Some studies extended the approach to specific subjects, such as metaphor and similarity.

Applications and types

Logic

Logicians analyze how analogical reasoning is used in arguments from analogy.

An analogy can be stated using is to and as to represent the analogous relationship between two pairs of expressions, for example, "Smile is to mouth, as wink is to eye." In mathematics and logic, this can be formalized with colon notation: a single colon for ratio, and a double colon for equality of the ratios.

In the field of testing, the colon notation of ratios and equality is often borrowed, so that the example above might be rendered, "Smile : mouth :: wink : eye" and pronounced the same way.

Linguistics

  • An analogy can be the linguistic process that reduces word forms thought to break rules to more common forms that follow these rules. For example, the English verb help once had the preterite (simple past tense) holp and the past participle holpen. These old-fashioned forms have been discarded and replaced by helped through the power of analogy (or by applying the more frequently used Verb-ed rule). This is called morphological leveling. Analogies can sometimes create rule-breaking forms; one example is the American English past tense form of dive: dove, formed on analogy with words such as drive: drove.
  • Neologisms can also be formed by analogy with existing words. A good example is software, formed by analogy with hardware; other analogous neologisms such as firmware and vapourware have followed. Another example is the humorous term underwhelm, formed by analogy with overwhelm.
  • Some people present analogy as an alternative to generative rules for explaining the productive formation of structures such as words. Others argue that they are in fact the same, and that rules are analogies that have essentially become standard parts of the linguistic system, whereas clearer cases of analogy have simply not (yet) done so (e.g. Langacker 1987: 445–447). This view agrees with the current views of analogy in cognitive science discussed above.

Analogy is also a term used in the Neogrammarian school of thought as a catch-all to describe any morphological change in a language that cannot be explained by sound change or borrowing.

Science

Analogies are mainly used as a means of creating new ideas and hypotheses, or of testing them, which is called the heuristic function of analogical reasoning.

Analogical arguments can also be probative, meaning that they serve as a means of proving the correctness of particular theses and theories. This application of analogical reasoning in science is debatable. Analogy can help prove important theories, especially in those kinds of science in which logical or empirical proof is not possible, such as theology, philosophy or cosmology insofar as it concerns those areas of the cosmos (the universe) that are beyond any data-based observation, where knowledge stems from human insight and reasoning beyond the senses.

Analogy can be used in theoretical and applied sciences in the form of models or simulations which can be considered as strong indications of probable correctness. Other, much weaker, analogies may also assist in understanding and describing nuanced or key functional behaviours of systems that are otherwise difficult to grasp or prove. For instance, an analogy used in physics textbooks compares electrical circuits to hydraulic circuits. Another example is the analogue ear based on electrical, electronic or mechanical devices.

Mathematics

Some types of analogies can have a precise mathematical formulation through the concept of isomorphism. In detail, this means that if two mathematical structures are of the same type, an analogy between them can be thought of as a bijection which preserves some or all of the relevant structure. For example, the real plane ℝ² and the complex numbers ℂ are isomorphic as vector spaces over the reals, but ℂ has more structure than ℝ² does: ℂ is a field as well as a vector space.
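Spelled out, the comparison rests on the standard vector-space isomorphism between the real plane and the complex numbers; the map below is the usual one and is given here only as a worked illustration.

\varphi : \mathbb{R}^2 \to \mathbb{C}, \qquad \varphi(a, b) = a + b\,i

\varphi\big((a_1, b_1) + (a_2, b_2)\big) = \varphi(a_1, b_1) + \varphi(a_2, b_2), \qquad \varphi\big(\lambda (a, b)\big) = \lambda\, \varphi(a, b)

(a_1 + b_1 i)(a_2 + b_2 i) = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1)\, i

The first two lines say that \varphi preserves the vector-space operations; the last line is the field multiplication that ℂ carries but ℝ², as a bare vector space, does not.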

Category theory takes the idea of mathematical analogy much further with the concept of functors. Given two categories C and D, a functor f from C to D can be thought of as an analogy between C and D, because f has to map objects of C to objects of D and arrows of C to arrows of D in such a way that the structure of their respective parts is preserved. This is similar to the structure mapping theory of analogy of Dedre Gentner, because it formalises the idea of analogy as a function which makes certain conditions true.
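In symbols (a standard formulation, added here for concreteness rather than quoted from the article), a functor F : C → D must preserve identities and composition:

F(\mathrm{id}_X) = \mathrm{id}_{F(X)} \quad \text{for every object } X \text{ of } C

F(g \circ f) = F(g) \circ F(f) \quad \text{for all composable arrows } f : X \to Y, \; g : Y \to Z \text{ in } C

These two conditions are what is meant by saying that the structure of the respective parts is preserved under the mapping.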

Artificial intelligence

A computer algorithm has achieved human-level performance on multiple-choice analogy questions from the SAT test. The algorithm measures the similarity of relations between pairs of words (e.g., the similarity between the pairs HAND:PALM and FOOT:SOLE) by statistically analysing a large collection of text. It answers SAT questions by selecting the choice with the highest relational similarity.
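The paragraph above describes a corpus-statistics approach. As a loose, simplified illustration of the underlying idea of relational similarity (not the algorithm the article refers to), the sketch below scores SAT-style choices using pretrained word embeddings; the embedding loader and file name are hypothetical.

# Illustrative sketch only: NOT the corpus-statistics algorithm described
# above, just a simpler embedding-based stand-in that also scores the
# relational similarity between word pairs.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relational_similarity(emb, pair_a, pair_b):
    """Compare the offset vectors of two word pairs, e.g. HAND:PALM vs FOOT:SOLE."""
    a1, a2 = emb[pair_a[0]], emb[pair_a[1]]
    b1, b2 = emb[pair_b[0]], emb[pair_b[1]]
    return cosine(a2 - a1, b2 - b1)

def answer_sat_question(emb, stem, choices):
    """Pick the choice pair whose relation is most similar to the stem pair."""
    return max(choices, key=lambda c: relational_similarity(emb, stem, c))

# emb is assumed to be a dict mapping words to numpy vectors, e.g. loaded
# from pretrained embeddings; the loader below is hypothetical.
# emb = load_embeddings("vectors.txt")
# answer_sat_question(emb, ("hand", "palm"),
#                     [("foot", "sole"), ("foot", "shoe"), ("foot", "toe")])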

Analogical reasoning in the human mind is free of the false inferences plaguing conventional artificial intelligence models (a property referred to as systematicity). Steven Phillips and William H. Wilson use category theory to mathematically demonstrate how such reasoning could arise naturally, by using relationships between the internal arrows that preserve the internal structures of the categories, rather than the mere relationships between the objects (called "representational states"). Thus the mind, and more intelligent AIs, may use analogies between domains whose internal structures transform naturally and reject those that do not.

Keith Holyoak and Paul Thagard (1997) developed their multiconstraint theory within structure mapping theory. They argue that the "coherence" of an analogy depends on structural consistency, semantic similarity and purpose. Structural consistency is greatest when the analogy is an isomorphism, although lower levels can be used as well. Similarity demands that the mapping connects similar elements and relationships of source and target, at any level of abstraction; it is greatest when there are identical relations and when connected elements have many identical attributes. An analogy achieves its purpose if it helps solve the problem at hand. The multiconstraint theory faces some difficulties when there are multiple sources, but these can be overcome. Hummel and Holyoak (2005) recast the multiconstraint theory within a neural network architecture. A problem for the multiconstraint theory arises from its concept of similarity, which, in this respect, is not obviously different from analogy itself. Computer applications demand that there be some identical attributes or relations at some level of abstraction. The model was extended (Doumas, Hummel, and Sandhofer, 2008) to learn relations from unstructured examples (providing the only current account of how symbolic representations can be learned from examples).
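As a toy illustration only (not Holyoak and Thagard's actual model, nor Hummel and Holyoak's network), the sketch below combines the three constraints named above into a single weighted "coherence" score; the relation format, the component scores and the weights are assumptions made up for the example.

# Toy sketch: score a candidate source-to-target mapping by combining the
# three constraints of the multiconstraint theory. Everything numeric here
# is a placeholder chosen for illustration, not an empirical model.

def structural_consistency(mapping, source_relations, target_relations):
    """Fraction of source relations whose image under the mapping exists in the target."""
    if not source_relations:
        return 0.0
    hits = 0
    for rel, args in source_relations:
        image = (rel, tuple(mapping.get(a) for a in args))
        if image in target_relations:
            hits += 1
    return hits / len(source_relations)

def coherence(mapping, source_relations, target_relations,
              semantic_similarity, purpose_fit,
              weights=(0.5, 0.3, 0.2)):
    """Weighted combination of structural, semantic and pragmatic constraints."""
    w_struct, w_sem, w_purpose = weights
    return (w_struct * structural_consistency(mapping, source_relations, target_relations)
            + w_sem * semantic_similarity
            + w_purpose * purpose_fit)

# Example call (semantic_similarity and purpose_fit would come from
# separate, domain-specific judgments):
# coherence(mapping, src_rels, tgt_rels, semantic_similarity=0.6, purpose_fit=1.0)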

Mark Keane and Brayshaw (1988) developed their Incremental Analogy Machine (IAM) to include working memory constraints as well as structural, semantic and pragmatic constraints, so that a subset of the base analogue is selected and mapping from base to target occurs in series. Empirical evidence shows that humans are better at using and creating analogies when the information is presented in an order where an item and its analogue are placed together.

Eqaan Doug and his team challenged the shared structure theory and especially its applications in computer science. They argue that there is no clear line between perception, including high-level perception, and analogical thinking. In fact, analogy occurs not only after, but also before and at the same time as high-level perception. In high-level perception, humans make representations by selecting relevant information from low-level stimuli. Perception is necessary for analogy, but analogy is also necessary for high-level perception. Chalmers et al. conclude that analogy actually is high-level perception. Forbus et al. (1998) claim that this is only a metaphor. It has been argued (Morrison and Dietrich 1995) that Hofstadter's and Gentner's groups do not defend opposite views, but are instead dealing with different aspects of analogy.

Anatomy

In anatomy, two anatomical structures are considered to be analogous when they serve similar functions but are not evolutionarily related, such as the legs of vertebrates and the legs of insects. Analogous structures are the result of independent evolution and should be contrasted with homologous structures, which share a common evolutionary origin.

Engineering

Often a physical prototype is built to model and represent some other physical object. For example, wind tunnels are used to test scale models of wings and aircraft which are analogous to (correspond to) full-size wings and aircraft.

Similarly, the MONIAC (an analogue computer) used the flow of water in its pipes as an analogue to the flow of money in an economy.

Cybernetics

Where two or more biological or physical participants meet, they communicate, and the stresses produced describe internal models of the participants. Pask, in his conversation theory, asserts that there exists an analogy describing both the similarities and the differences between any pair of the participants' internal models or concepts.

History

In historical science, comparative historical analysis often uses the concept of analogy and analogical reasoning. Recent computational methods operate on large document archives, allowing analogous or corresponding terms from the past to be found in response to arbitrary user queries (e.g., Myanmar - Burma) and explained.

Morality

Analogical reasoning plays a very important part in morality. This may be because morality is supposed to be impartial and fair: if it is wrong to do something in situation A, and situation B corresponds to A in all relevant features, then it is also wrong to perform that action in situation B. Moral particularism accepts such reasoning, instead of deduction and induction, since only analogical reasoning can be used without appeal to general moral principles.

Psychology

Structure mapping theory

Structure mapping, originally proposed by Dedre Gentner, is a theory in psychology that describes the psychological processes involved in reasoning through, and learning from, analogies. More specifically, this theory aims to describe how familiar knowledge, or knowledge about a base domain, can be used to inform an individual's understanding of a less familiar idea, or a target domain. According to this theory, individuals view their knowledge of ideas, or domains, as interconnected structures. In other words, a domain is viewed as consisting of objects, their properties, and the relationships that characterise their interactions. The process of analogy then involves:

  1. Recognising similar structures between the base and target domains.
  2. Finding deeper similarities by mapping other relationships of a base domain to the target domain.
  3. Cross-checking those findings against existing knowledge of the target domain.

In general, it has been found that people prefer analogies where the two systems correspond highly to each other (e.g. have similar relationships across the domains as opposed to just having similar objects across domains) when these people try to compare and contrast the systems. This is also known as the systematicity principle.

An example that has been used to illustrate structure mapping theory comes from Gentner and Gentner (1983) and uses the base domain of flowing water and the target domain of electricity. In a system of flowing water, the water is carried through pipes and the rate of water flow is determined by the pressure of the water towers or hills. This relationship corresponds to that of electricity flowing through a circuit. In a circuit, the electricity is carried through wires and the current, or rate of flow of electricity, is determined by the voltage, or electrical pressure. Given the similarity in structure, or structural alignment, between these domains, structure mapping theory would predict that relationships from one of these domains would be inferred in the other using analogy.
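As a minimal sketch of this idea (not Gentner's actual structure-mapping engine), the Python snippet below represents each domain as relation triples and checks which base relations carry over to the target under a candidate object correspondence; the relation names and correspondence are made up for the example.

# Minimal sketch: represent the water and electricity domains as relation
# triples and test which base relations align under an object correspondence.

base = {                      # flowing-water domain
    ("carried_through", "water", "pipe"),
    ("determines", "pressure", "flow_rate"),
}
target = {                    # electrical-circuit domain
    ("carried_through", "electricity", "wire"),
    ("determines", "voltage", "current"),
}
correspondence = {
    "water": "electricity", "pipe": "wire",
    "pressure": "voltage", "flow_rate": "current",
}

def project(relation, mapping):
    name, x, y = relation
    return (name, mapping.get(x, x), mapping.get(y, y))

# Relations (not just objects) that align under the mapping; by the
# systematicity principle it is these shared relational structures that make
# the analogy compelling and licence further inferences about the target.
aligned = {r for r in base if project(r, correspondence) in target}
print(aligned)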

Children

Children do not always need prompting to make comparisons in order to learn abstract relationships. Eventually, children undergo a relational shift, after which they begin seeing similar relations across different situations instead of merely looking at matching objects. This is critical in their cognitive development, as continuing to focus on specific objects would reduce children's ability to learn abstract patterns and reason analogically. Some researchers have proposed that children's basic brain functions (i.e., working memory and inhibitory control) do not drive this relational shift; instead, it is driven by their relational knowledge, such as having labels for objects that make the relationships clearer (see the previous section). However, there is not enough evidence to determine whether the relational shift occurs because basic brain functions become better or because relational knowledge becomes deeper.

Additionally, research has identified several factors that may increase the likelihood that a child will spontaneously engage in comparison and learn an abstract relationship without the need for prompts. Comparison is more likely when the objects to be compared are close together in space and/or time, are highly similar (although not so similar that they match, which interferes with identifying relationships), or share common labels.

Law

In law, analogy is primarily used to resolve issues on which there is no previous authority. A distinction can be made between analogical reasoning employed in statutory law and analogical reasoning present in precedential law (case law).

Statutory

In statutory law analogy is used in order to fill the so-called lacunas, gaps or loopholes.

  • A gap arises when a specific case or legal issue is not clearly dealt with in written law. One may then identify a statutory provision which covers cases similar to the case at hand and apply this provision to the case by analogy. Such a gap, in civil law countries, is referred to as a gap extra legem (outside of the law), while the analogy which closes it is termed analogy extra legem (outside of the law). The case at hand itself is called an unprovided case.
  • A second gap comes into being when there is a statutory provision which applies to the case at hand but leads in this case to an unwanted outcome. One may then try to find another statutory provision that covers cases similar to the case at hand and act upon that provision by analogy instead of the provision that applies directly. This kind of gap is called a gap contra legem (against the law), while the analogy which fills it is referred to as analogy contra legem (against the law).
  • A third gap occurs when a statutory provision regulates the case at hand but is unclear or ambiguous. In such circumstances, to decide the case, one may try to determine what this provision means by relying on statutory provisions which address cases similar to the case at hand, or on other cases regulated by the unclear or ambiguous provision. A gap of this type is named a gap intra legem (within the law), and the analogy which deals with it is referred to as analogy intra legem (within the law). In equity, the expression infra legem (below the law) is used.

The similarity upon which statutory analogy depends may stem from the resemblance of the raw facts of the cases being compared, from the purpose (the so-called ratio legis, which is generally the will of the legislature) of the provision applied by analogy, or from some other sources.

Statutory analogy may also be based upon more than one statutory provision, or even upon the spirit of the law. In the latter case it is called analogia iuris (from the law in general), as opposed to analogia legis (from a specific legal provision or provisions).

Case

In case law (precedential law), analogies can be drawn from precedent cases. The judge who decides the case at hand may find that its facts are similar to the facts of a prior case to an extent that the outcomes of the cases should be treated as the same or similar (stare decisis). Such use of analogy in precedential law pertains mainly to so-called cases of first impression, i.e. cases which have not yet been regulated by any binding precedent (are not covered by a precedential rule).

Reasoning from (dis)analogy is also employed when a judge distinguishes a precedent: on the basis of discerned differences between the case at hand and the precedential case, the judge declines to decide the case upon the precedent whose rule would otherwise embrace it.

There is also much room for other uses of analogy in precedential law. One of them is resort to analogical reasoning when resolving a conflict between two or more precedents which all apply to the case at hand despite dictating different legal outcomes for that case. Analogy can also take part in verifying the contents of the ratio decidendi, in deciding upon precedents that have become irrelevant, and in quoting precedents from other jurisdictions. It is also visible in legal education, notably in the US (the so-called 'case method').

Restrictions and Civil Law

The law of every jurisdiction is different. In legal matters the use of analogy is sometimes forbidden (by the law itself or by common agreement between judges and scholars): the most common instances concern criminal, international, administrative and tax law, especially in jurisdictions which do not have a common law system. For example:

  • Analogy should not be resorted to in criminal matters whenever its outcome would be unfavorable to the accused or suspect. Such a ban finds its footing in the principle "nullum crimen, nulla poena sine lege", understood to mean that there is no crime (no punishment) unless it is plainly provided for in a statutory provision or an already existing judicial precedent.
  • Analogy should be applied with caution in the domain of tax law. Here, the principle: "nullum tributum sine lege" justifies a general ban on the usage of analogy that would lead to an increase in taxation or whose results would – for some other reason – be harmful to the interests of taxpayers.
  • Extending by analogy those provisions of administrative law that restrict human rights and the rights of the citizens (particularly the category of the so-called "individual rights" or "basic rights") is prohibited in many jurisdictions. Analogy generally should also not be resorted to in order to make the citizen's burdens and obligations larger.
  • The other limitations on the use of analogy in law, among many others, apply to:
    • the analogical extension of statutory provisions that involve exceptions to more general regulation or provisions (this restriction flows from Latin maxims well known especially in continental civil law systems: "exceptiones non sunt extendendae", "exceptio est strictissimae interpretationis" and "singularia non sunt extendenda")
    • the usage of an analogical argument with regard to those statutory provisions which comprise lists (enumerations)
    • extending by analogy those statutory provisions which give the impression that the legislature intended to regulate some issues in an exclusive (exhaustive) manner (such a manner is especially implied when the wording of a given statutory provision involves pointers such as "only", "exclusively", "solely", "always", "never") or which have a plain, precise meaning.

In civil law jurisdictions, analogy may be permitted or required by law. But in this branch of law, too, there are restrictions confining the possible scope of analogical argument. Examples include the prohibition on using analogy in relation to provisions regarding time limits, and a general ban on recourse to analogical arguments which would extend statutory provisions that impose obligations or burdens or that order (mandate) something. Other examples concern the usage of analogy in the field of property law, especially when one wants to create new property rights by it, or to extend statutory provisions whose terms are unambiguous (unequivocal) and plain (clear), e.g. being of or under a certain age.

Teaching strategies

Analogies as defined in rhetoric are a comparison between words, but an analogy more generally can also be used to illustrate and teach. To enlighten pupils on the relations between or within certain concepts, items or phenomena, a teacher may refer to other concepts, items or phenomena that pupils are more familiar with. It may help to create or clarify one theory (or theoretical model) via the workings of another theory (or theoretical model). Thus an analogy, as used in teaching, compares a topic that students are already familiar with to a new topic that is being introduced, so that students can get a better understanding of the new topic by relating back to existing knowledge. This can be particularly helpful when the analogy spans different disciplines: indeed, various teaching innovations are now emerging that use sight-based analogies for teaching and research across subjects such as science and the humanities.

Shawn Glynn, a professor in the department of educational psychology and instructional technology at the University of Georgia, developed a theory on teaching with analogies and laid out steps to explain the process:

  1. Introduce the new topic that is about to be taught and give some general knowledge on the subject.
  2. Review the concept that the students already know to ensure they have the proper knowledge to assess the similarities between the two concepts.
  3. Find relevant features within the analogy of the two concepts.
  4. Find similarities between the two concepts so students are able to compare and contrast them in order to understand.
  5. Indicate where the analogy breaks down between the two concepts.
  6. Draw a conclusion about the analogy and compare the new material with the already learned material.

Typically this method is used to learn topics in science.

In 1989, teacher Kerry Ruef began a program titled The Private Eye Project. It is a method of teaching that revolves around using analogies in the classroom to better explain topics. She got the idea of using analogies as a part of the curriculum while observing objects; she said, "my mind was noting what else each object reminded me of..." This led her to teach with the question, "what does [the subject or topic] remind you of?" The idea of comparing subjects and concepts led to the development of The Private Eye Project as a method of teaching. The program is designed to build critical thinking skills, with analogy as one of its main themes. While Glynn focuses on using analogies to teach science, The Private Eye Project can be used for any subject, including writing, math, art, social studies, and invention. It is now used by thousands of schools around the country.

Religion

Catholicism

The Fourth Lateran Council of 1215 taught: "For between creator and creature there can be noted no similarity so great that a greater dissimilarity cannot be seen between them."

The theological exploration of this subject is called the analogia entis. The consequence of this theory is that all true statements concerning God (excluding the concrete details of Jesus' earthly life) are rough analogies, without implying any falsehood. Such analogical and true statements would include God is, God is Love, God is a consuming fire, God is near to all who call him, or God as Trinity, where being, love, fire, distance, number must be classed as analogies that allow human cognition of what is infinitely beyond positive or negative language.

The use of theological statements in syllogisms must take into account their analogical essence, in that every analogy breaks down when stretched beyond its intended meaning.

Islam

Islamic jurisprudence makes ample use of analogy as a means of drawing conclusions from outside sources of law. The bounds and rules employed in making analogical deductions vary greatly between madhhabs and, to a lesser extent, between individual scholars. It is nonetheless a generally accepted source of law within jurisprudential epistemology, with the chief opposition to it coming from the dhahiri (literalist) school.

Conversation theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Conversation_theory

Conversation theory is a cybernetic approach to the study of conversation, cognition and learning that may occur between two participants who are engaged in conversation with each other. It presents an experimental framework heavily utilizing human-computer interactions and computer theoretic models as a means to present a scientific theory explaining how conversational interactions lead to the emergence of knowledge between participants. The theory was developed by Gordon Pask, who credits Bernard Scott, Dionysius Kallikourdis, Robin McKinnon-Wood, and others during its initial development and implementation as well as Paul Pangaro during subsequent years.

Overview

Conversation theory may be described as a formal theory of conversational process, as well as a theoretical methodology concerned with concept-forming and concept-sharing between conversational participants. It may be viewed as a framework for examining learning and development through conversational techniques and human-machine interactions, the results of which may then inform approaches to education, educational psychology, and epistemology. While the framework is interpretable as a psychological framework with educational applications (specifically, as a general framework for thinking about teaching and learning), Pask's motivation in developing the theory has been interpreted by some who worked closely with him as developing upon certain theoretical concerns regarding the nature of cybernetic inquiry.

The theory has been noted to have been influenced by a variety of psychological, pedagogical and philosophical thinkers such as Lev Vygotsky, R. D. Laing and George H. Mead, with some authors suggesting that the kind of human-machine learning interactions documented in conversation theory mirror Vygotsky's descriptions of the zone of proximal development and his descriptions of spontaneous and scientific concepts.

The theory prioritizes learning and teaching approaches related to education. A central idea of the theory is that learning occurs through conversations: if participant A is to be conscious with participant B of a topic of inquiry, both participants must be able to converse with each other about that topic. Because of this, participants engaging in a discussion about a subject matter make their knowledge claims explicit through such conversational interactions.

The theory is concerned with a variety of "psychological, linguistic, epistemological, social or non-commitally mental events of which there is awareness". Awareness in this sense is not of a person-specific type, i.e., it is not necessarily localized in a single participant. Instead, the type of awareness examined in conversation theory is the kind of joint awareness that may be shared between entities. While there is an acknowledgment of its similarities to phenomenology, the theory extends its analysis to examine cognitive processes. However, the concept of cognition is not viewed as merely being confined to an individual's brain or central nervous system. Instead, cognition may occur at the level of a group of people (leading to the emergence of social awareness), or may characterize certain types of computing machines.

Initial results from the theory led to a distinction between the types of learning strategies participants used during the learning process, whereby students in general gravitated towards holist or serialist learning strategies (with the optimal mixture producing a versatile learning strategy).

Conversation

Following Hugh Dubberly and Paul Pangaro, a conversation in the context of conversation theory involves an exchange between two participants whereby each participant is contextualized as a learning system whose internal states are changed through the course of the conversation. What can be discussed through conversation, i.e., topics of discussion, are said to belong to a conversational domain.

Conversation is distinguished from the mere exchange of information as seen in information theory, by the fact that utterances are interpreted within the context of a given perspective of such a learning system. Each participant's meanings and perceptions change during the course of a conversation, and each participant can agree to commit to act in certain ways during the conversation. In this way, conversation permits not only learning but also collaboration through participants coordinating themselves and designating their roles through the means of conversation.

Since meanings are agreed during the course of a conversation, and since purported agreements can be illusory (whereby we think we have the same understanding of a given topic but in fact do not), an empirical approach to the study of conversation would require stable reference points during such conversational exchanges between peers so as to permit reproducible results. Using computer theoretical models of cognition, conversation theory can document these intervals of understanding that arise in the conversations between two participating individuals, such that the development of individual and collective understandings can be analyzed rigorously.

In this way, Pask has been argued to have been an early pioneer of AI-based educational approaches, having proposed that advances in computational media may enable conversational forms of interaction to take place between man and machine.

Language

The types of languages that conversation theory utilizes in its approach are distinguished by a language's role in relation to an experiment in which a conversation is examined as the subject of inquiry; thus, conversations can be conducted at different levels depending on the role a language has in relation to an experiment. The types of languages are as follows: natural languages, used for general discussion outside the experiment; object languages, which are the subject of inquiry during an experiment; and finally a metalanguage, which is used to talk about the design, management, and results of an experiment.

A natural language is treated as an unrestricted language used between a source (say, a participant) and an interrogator or analyst (say, an experimenter). For this reason, it may be considered a language for general discussion in the context of conversation theory. An object language, meanwhile, has some of the qualities of a natural language (it permits commands, questions, ostension and predication), but is used in conversation theory specifically as the language studied during experiments. Finally, the metalanguage is an observational language used by an interrogator or analyst for describing the conversational system under observation, prescribing actions that are permitted within such a system, and setting parameters regarding what may be discussed during an experiment under observation.

The object language differs from most formal languages by virtue of being "a command and question language[,] not an assertoric language like [a] predicate calculus". Moreover, it is a language primarily dealing in metaphors indicating material analogies, not in propositions bearing truth or falsity values. Since conversation theory specifically focuses on learning and development within human subjects, the object language is separated into two distinct modes of conversing.

Conversation theory conceptualises learning as the result of two integrated levels of control: the first level of control designates a set of problem-solving procedures which attempt to attain goals or subgoals, whereas the second level of control denotes the various constructive processes that a student has acquired through maturation, imprinting and previous learning. The object language is then demarcated in conversation theory along these lines, being split into two corresponding types of discourse, so that an object language is the ordered pair of such discourse types. According to Bernard Scott, one type of discourse of the object language may be conceptualized as the level of how, i.e., discourse concerned with "how to “do” a topic: how to recognize it, construct it, maintain it and so on". The other type of discourse may be conceptualized as the level of why, i.e., discourse "concerned with explaining or justifying what a topic means in terms of other topics".

Concepts

A concept in conversation theory is conceived of as the production, reproduction, and maintenance of a given topic relation from other topic relations, all belonging to a given conversational domain, with the topic relations indexed over a finite set. A concept must satisfy the twin condition that it must entail and be entailed by other topics.

A concept in the context of conversation theory is not a class, nor a description of a class, nor a stored description: instead, a concept is specifically used to reconstruct, reproduce or stabilize relations. Thus, if a given topic relation is the head topic of discussion, the concept of that relation is what produces, reproduces, and maintains that relation.

Now, a concept itself is considered to consist of an ordered pair containing a program and an interpretation, whereby the program attempts to derive a given topic relation, while the interpretation refers to the compilation of that program. In other words, given a specific topic relation, a program attempts to derive that relation through a series of other topic relations, which are compiled in such a way as to derive the initial topic relation. A concept so defined is considered to be a procedure, embodied by an underlying processor.
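Purely as a speculative sketch of this "program plus interpretation" reading (not Pask's own formalism, and with all names made up for the example), the following Python snippet renders a concept as an ordered pair of a derivation recipe and its compiled, executable form.

# Speculative sketch only: a "concept" as an ordered pair of a program (a
# recipe for deriving a topic relation from other topics) and an
# interpretation (the compiled, executable form of that recipe).

def make_concept(program, compile_fn):
    interpretation = compile_fn(program)      # compile the recipe once
    return (program, interpretation)

# A hypothetical topic "T" derived from topics "P" and "Q".
program = ("derive", "T", ("P", "Q"))

def compile_fn(prog):
    _, head, parts = prog
    def run(domain):
        # (re)produce the topic relation only if its supporting topics hold
        return head if all(p in domain for p in parts) else None
    return run

concept_T = make_concept(program, compile_fn)
print(concept_T[1]({"P", "Q"}))   # -> "T": the relation is (re)produced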

In this way, Pask envisages concepts as mental organisations that hold a hypothesis and seek to test that hypothesis in order to confirm or deny its validity. This notion of a concept has been noted as formally resembling the TOTE cycle discussed by Miller, Galanter and Pribram. The contents and structure that a concept might have at a given point in its continuous deformation can be represented through an entailment structure.

Such conceptual forms are said to be emergent through conversational interactions. They are encapsulated through entailment structures, which are a way of visualizing an organized and publicly available collection of resultant knowledge. Entailment structures may afford certain advantages over certain semantic network structures, as they force semantic relations to be expressed as belonging to coherent structures. An entailment structure is composed of a series of nodes and arcs representing a series of topic relations and the derivations of such topic relations: each topic relation is represented by a node, and an entailment is represented by an arc, so that, for example, an arc joining the topics P and Q to the topic T indicates that P and Q together entail T.

Assuming the same derivation process is used for all topics in such an entailment structure, the result is a minimal entailment mesh consisting of a triad of derivations: T derived from P and Q, P derived from Q and T, and Q derived from T and P. A solid arc indicates that a given head topic relation is derived from subordinate topics, whereas arcs with dotted lines represent how the head topic may be used to derive other topics.

Two solid arcs may also permit alternative derivations of the topic T, which reads: either the set containing P and Q, or the set containing R and S, entails T. Lastly, a formal analogy arises where two topics T and T' belonging to two entailment meshes are demonstrated to have a one-to-one correspondence with each other; a diamond shape denotes the analogy relation that can be claimed to exist between any three topics of each entailment mesh.
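Written in ad-hoc notation (chosen here for readability, and not necessarily the notation of the original presentation), the derivations just described can be summarized as:

T \Leftarrow \{P, Q\}, \qquad P \Leftarrow \{Q, T\}, \qquad Q \Leftarrow \{T, P\} \quad \text{(a minimal entailment mesh)}

T \Leftarrow \{P, Q\} \lor \{R, S\} \quad \text{(alternative derivations of T)}

The second line reads: T may be derived either from the set containing P and Q or from the set containing R and S.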


The relation of one topic T to another topic T' by an analogy can also be seen as being based on an isomorphism between the two meshes together with a semantic distinction between two individual universes of interpretation. Assuming an analogy holds for two topics in two distinct entailment meshes, it should hold for all corresponding topics if the analogy is to be considered coherent and stable.

Cognitive Reflector

From conversation theory, Pask developed what he called a "Cognitive Reflector". This is a virtual machine for selecting and executing concepts or topics from an entailment mesh shared by at least a pair of participants. It features an external modelling facility on which agreement between, say, a teacher and pupil may be shown by reproducing public descriptions of behaviour. We see this in essay and report writing or the "practicals" of science teaching.

Lp was Pask's protolanguage, which produced operators like Ap, an operator that concurrently executes the concept, Con, of a topic, T, to produce a description, D. Thus:

Ap(Con(T)) => D(T), where => stands for produces.

A succinct account of these operators is presented by Pask. Amongst many insights, he points out that three indexes are required for concurrent execution, two for parallel execution and one to designate a serial process. He subsumes this complexity by designating participants A, B, etc. In a commentary toward the end of that work, Pask states:

The form not the content of the theories (conversation theory and interactions of actors theory) return to and is congruent with the forms of physical theories; such as wave particle duality (the set theoretic unfoldment part of conversation theory is a radiation and its reception is the interpretation by the recipient of the descriptions so exchanged, and vice versa). The particle aspect is the recompilation by the listener of what a speaker is saying. Theories of many universes, one at least for each participant A and one to participant B- are bridged by analogy. As before this is the truth value of any interaction; the metaphor for which is culture itself.

Learning strategies

In order to facilitate learning, Pask argued that subject matter should be represented in the form of structures which show what is to be learned. These structures exist at a variety of different levels depending upon the extent of the relationships displayed. The critical method of learning according to conversation theory is "teachback", in which one person teaches another what they have learned.

Pask identified two different types of learning strategies:

  • Serialists – Progress through a structure in a sequential fashion
  • Holists – Look for higher order relations

The ideal is the versatile learner, who is neither a vacuous holist ("globe trotter") nor a serialist who knows little of the context of his work.

Many cyberneticians describe the act of understanding, at the stage of learning where one converges, as a closed loop. Instead of simply "taking in" new information, one goes back over one's existing understandings and pulls together the information that was "triggered", forming a new connection. This connection becomes tighter and one's understanding of a certain concept is solidified, or "stable" (Pangaro, 2003). Furthermore, Gordon Pask emphasized that conflict is the basis for the notion of "calling for" additional information (Pangaro, 1992).

According to Entwistle, the experiments which led to the investigation of the phenomena later denoted by the term learning strategy came about through the implementation of a variety of learning tasks. Initially, this was done using either CASTE, INTUITION, or the Clobbits pseudo-taxonomy. However, given issues resulting from either the time-consuming nature of operating the experiments or the inexactness of experimental conditions, new tests were created in the form of the Spy Ring History test and the Smuggler's test. The former involved a participant having to learn the history of a fictitious spy ring (in other words, the history of a fictitious espionage network): the participant had to learn about the history of five spies in three countries over a period of five years. The comprehension learning component of the test involved learning the similarities and differences between a set of networks, whereas the operation learning component involved learning the role each spy played and the sequence of actions that spy performed over a given year.

While Entwistle noted difficulties regarding the length of such tests for groups of students engaged in the Spy Ring History test, the results of the test did seem to correspond with the types of learning strategies discussed. However, it has been noted that, while the work of Pask and associates on learning styles has been influential in the development of both conceptual tools and methodology, the Spy Ring History test and Smuggler's test may have been biased towards STEM students rather than humanities students in their implementation, with Entwistle arguing that the "rote learning of formulae and definitions, together with a positive reaction to solving puzzles and problems of a logical nature, are characteristics more commonly found in science than arts student".

Applications

One potential application of conversation theory that has been studied and developed is as an alternative approach to common types of information retrieval algorithms used by search engines. Unlike PageRank-like algorithms, which determine the priority of a search result based on how many hyperlinks on the web link to it, conversation theory has been used to apply a discursive approach to web search requests.

ThoughtShuffler is an attempt to build a search engine utilizing design principles from conversation theory. In this approach, terms that are input into a search request yield results relating to other terms that derive, or help provide context for, the meaning of the first term, in a way that mimics the derivation of topics in an entailment structure. For example, given an input search term, a neighbourhood of corresponding terms that comprise the meaning of the first term may be suggested for the user to explore. The search engine interface then highlights snippets of webpages corresponding to the neighbourhood terms that help give meaning to the first.

The aim of this design is to provide just enough information for a user to become curious about a topic, so as to induce the intention to explore other subtopics related to the main term input into the search engine.

Argument from analogy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Argument_from_analogy

Argument from analogy is a special type of inductive argument, in which perceived similarities are used as a basis to infer some further similarity that has not been observed yet. Analogical reasoning is one of the most common methods by which human beings try to understand the world and make decisions. When a person has a bad experience with a product and decides not to buy anything further from the producer, this is often a case of analogical reasoning, since the two products share a maker and are therefore both perceived as "bad". It is also the basis of much of science; for instance, experiments on laboratory rats are based on the idea that some physiological similarities between rats and humans imply some further similarity (e.g. possible reactions to a drug).

Structure

The process of analogical inference involves noting the shared properties of two or more things, and from this basis concluding that they also share some further property. The structure or form may be generalised like so:

P and Q are similar in respect to properties a, b, and c.
P has been observed to have further property x.
Therefore, Q probably has property x also.

The argument does not assert that the two things are identical, only that they are similar. The argument may provide us with good evidence for the conclusion, but the conclusion does not follow as a matter of logical necessity. Determining the strength of the argument requires that we take into consideration more than just the form: the content must also come under scrutiny.

Analysing arguments from analogy

Strength of an analogy

Several factors affect the strength of the argument from analogy:

  • The relevance (positive or negative) of the known similarities to the similarity inferred in the conclusion.
  • The degree of relevant similarity (or difference) between the two objects.
  • The amount and variety of instances that form the basis of the analogy.

Counterarguments

Arguments from analogy may be attacked by using disanalogy, using counteranalogy, and by pointing out unintended consequences of an analogy. To understand how one might analyse an argument from analogy, consider the teleological argument and its criticisms put forward by the philosopher David Hume.

The logic behind the watchmaker argument is that one cannot assume that a complex and precise object like a watch came about through some random process; we readily infer that such an object had an intelligent creator who planned its use. Therefore, the argument goes, we ought to draw the same conclusion for another complex and apparently designed object: the universe.

Hume argued that the universe and a watch have many relevant differences; for instance, the universe is often very disorderly and random, but a watch is not. This form of argument is called "disanalogy": if the amount and variety of relevant similarities between two objects strengthens an analogical conclusion, then the amount and variety of relevant differences must weaken it. Creating a "counteranalogy", Hume argued that some natural objects seem to have order and complexity, snowflakes for example, but are not the result of intelligent direction. But even if a snowflake's order and complexity might not have direction, its causes might; so this falsifies the statement but begs the question. Finally, Hume points out several possible "unintended consequences" of the argument; for instance, objects such as watches are often the result of the labour of groups of individuals, so the reasoning used by the teleological argument would seem to lend support to polytheism.

False analogy

A false analogy is an informal fallacy, or a faulty instance, of the argument from analogy.

An argument from analogy is weakened if it is inadequate in any of the above respects. The term "false analogy" comes from the philosopher John Stuart Mill, one of the first individuals to examine analogical reasoning in detail. One of Mill's examples involved an inference that some person is lazy from the observation that his or her sibling is lazy. According to Mill, sharing parents is not at all relevant to the property of laziness (although this particular example is arguably a faulty generalisation rather than a false analogy).

Planets in a planetary system orbit a star.
Electrons in an atom orbit a nucleus, and electrons jump instantly from orbit to orbit.
Therefore, planets in a planetary system jump instantly from orbit to orbit.

This is a false analogy because it fails to account for the relevant differences between a planetary system and an atom.

Asteroid mining

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Asteroid_mining ...