
Wednesday, November 20, 2024

Neurophilosophy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neurophilosophy

Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.

Specific issues

Below is a list of specific issues important to philosophy of neuroscience:

  • "The indirectness of studies of mind and brain"
  • "Computational or representational analysis of brain processing"
  • "Relations between psychological and neuroscientific inquiries"
  • Modularity of mind
  • What constitutes adequate explanation in neuroscience?
  • "Location of cognitive function"

Indirectness of studies of the mind and brain

Many of the methods and techniques central to neuroscientific discovery rely on assumptions that can limit the interpretation of the data. Philosophers of neuroscience have discussed such assumptions in the use of functional magnetic resonance imaging (fMRI), dissociation in cognitive neuropsychology, single unit recording, and computational neuroscience. Following are descriptions of many of the current controversies and debates about the methods employed in neuroscience.

fMRI

Many fMRI studies rely heavily on the assumption of localization of function (also known as functional specialization).

Localization of function means that many cognitive functions can be localized to specific brain regions. An example of functional localization comes from studies of the motor cortex. There seem to be different groups of cells in the motor cortex responsible for controlling different groups of muscles.

Many philosophers of neuroscience criticize fMRI for relying too heavily on this assumption. Michael Anderson points out that subtraction-method fMRI misses much of the brain information that is important to the cognitive process under study. Subtraction fMRI shows only the differences between the task activation and the control activation, yet many of the brain areas activated in the control condition are clearly important for the task as well.
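Since subtraction is simply a pointwise difference of activation maps, Anderson's worry can be made concrete in a few lines. The sketch below uses made-up activation numbers, not real fMRI data:

```python
# A minimal sketch of the subtraction method's logic (illustrative, not a real
# fMRI pipeline): activation values are hypothetical.
import numpy as np

# Hypothetical mean activation maps (4 "voxels") for a task and a control condition.
task_activation = np.array([2.0, 1.5, 0.9, 0.8])
control_activation = np.array([0.4, 1.4, 0.9, 0.2])

# The subtraction method keeps only the difference between conditions...
contrast = task_activation - control_activation
print(contrast)  # [1.6 0.1 0.  0.6]

# ...so voxels 2 and 3, strongly active in BOTH conditions, vanish from the map,
# even though they may be essential to performing the task. This is Anderson's point.
```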

Rejections of fMRI

Some philosophers entirely reject any notion of localization of function and thus believe fMRI studies to be profoundly misguided. These philosophers maintain that brain processing acts holistically, that large sections of the brain are involved in processing most cognitive tasks (see holism in neurology and the modularity section below). One way to understand their objection to the idea of localization of function is the radio repairman thought experiment. In this thought experiment, a radio repairman opens up a radio and rips out a tube. The radio begins whistling loudly, and the repairman declares that he must have ripped out the anti-whistling tube. There is no anti-whistling tube in the radio; the repairman has confounded function with effect. This criticism was originally targeted at the logic used by neuropsychological brain-lesion experiments, but it is still applicable to neuroimaging.

These considerations are similar to Van Orden's and Paap's criticism of circularity in neuroimaging logic. According to them, neuroimagers assume that their theory of cognitive component parcellation is correct and that these components divide cleanly into feed-forward modules. These assumptions are necessary to justify their inference of brain localization. The logic is circular if the researcher then uses the appearance of brain region activation as proof of the correctness of their cognitive theories.

Reverse inference

A different problematic methodological assumption within fMRI research is the use of reverse inference. A reverse inference is when the activation of a brain region is used to infer the presence of a given cognitive process. Poldrack points out that the strength of this inference depends critically on the likelihood that a given task employs a given cognitive process and the likelihood of that pattern of brain activation given that cognitive process. In other words, the strength of reverse inference is based upon the selectivity of the task used as well as the selectivity of the brain region activation.
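Poldrack's point is naturally cast in Bayesian terms. The sketch below, with invented probabilities, shows how the same activation supports a strong inference from a selective region and a weak one from an unselective region:

```python
# A hedged sketch of Poldrack's point via Bayes' rule: the strength of a
# reverse inference depends on the selectivity of the region. All numbers
# here are hypothetical.
def posterior(p_act_given_proc, p_act_given_not_proc, prior):
    """P(process | activation) by Bayes' rule."""
    num = p_act_given_proc * prior
    return num / (num + p_act_given_not_proc * (1 - prior))

# Selective region: rarely active without the process of interest.
print(posterior(0.8, 0.1, 0.5))   # ~0.89 -> strong reverse inference
# Unselective region: active under many different processes.
print(posterior(0.8, 0.6, 0.5))   # ~0.57 -> weak reverse inference
```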

A 2011 opinion piece published in the New York Times has been heavily criticized for misusing reverse inference. In the study it described, participants were shown pictures of their iPhones while the researchers measured activation of the insula. The researchers took insula activation as evidence of feelings of love and concluded that people loved their iPhones. Critics were quick to point out that the insula is not a very selective piece of cortex, and is therefore not amenable to reverse inference.

The neuropsychologist Max Coltheart took the problems with reverse inference a step further and challenged neuroimagers to give one instance in which neuroimaging had informed psychological theory. Coltheart takes the burden of proof to be an instance where the brain imaging data is consistent with one theory but inconsistent with another theory.

Roskies maintains that Coltheart's ultra-cognitive position makes his challenge unwinnable. Since Coltheart maintains that the implementation of a cognitive state has no bearing on the function of that cognitive state, it is impossible to find neuroimaging data that can adjudicate between psychological theories in the way Coltheart demands. Neuroimaging data will always be relegated to the lower level of implementation, unable to selectively determine one or another cognitive theory.

In a 2006 article, Richard Henson suggests that forward inference can be used to infer dissociation of function at the psychological level. He suggests that such inferences can be made when there are crossing activations between two task types in two brain regions and no change in activation in a mutual control region.

Pure insertion

One final assumption is that of pure insertion in fMRI: the assumption that a single cognitive process can be inserted into another set of cognitive processes without affecting the functioning of the rest. For example, to find the reading comprehension area of the brain, researchers might scan participants while they are presented with a word and while they are presented with a non-word (e.g. "Floob"). If the researchers infer that the resulting difference in brain pattern represents the regions of the brain involved in reading comprehension, they have assumed that these changes do not reflect changes in task difficulty or differential recruitment between tasks. The term pure insertion was coined by Donders as a criticism of reaction time methods.

Resting-state functional-connectivity MRI

Recently, researchers have begun using a new functional imaging technique called resting-state functional-connectivity MRI (rs-fcMRI). A subject's brain is scanned while the subject sits idle in the scanner. By looking at the natural fluctuations in the blood-oxygen-level-dependent (BOLD) signal while the subject is at rest, researchers can see which brain regions co-vary in activation. They can then use the patterns of covariance to construct maps of functionally linked brain areas.

The name "functional-connectivity" is somewhat misleading since the data only indicates co-variation. Still, this is a powerful method for studying large networks throughout the brain.

Methodological issues

There are a couple of important methodological issues that need to be addressed. Firstly, there are many different possible brain parcellations that could be used to define the regions of the network, and the results could vary significantly depending on the regions chosen.

Secondly, what mathematical techniques are best to characterize these brain regions?

The brain regions of interest are somewhat constrained by the size of the voxels. Rs-fcMRI uses voxels that are only a few millimeters cubed, so the brain regions have to be defined on a larger scale. Two of the statistical methods commonly applied to network analysis can work at the single-voxel spatial scale, but graph-theory methods are extremely sensitive to the way nodes are defined.

Brain regions can be divided according to their cellular architecture, according to their connectivity, or according to physiological measures. Alternatively, one could take a "theory-neutral" approach, and randomly divide the cortex into partitions with an arbitrary size.

As mentioned earlier, there are several approaches to network analysis once the brain regions have been defined. Seed-based analysis begins with an a priori defined seed region and finds all of the regions that are functionally connected to that region. Wig et al. caution that the resulting network structure will not give any information concerning the inter-connectivity of the identified regions or the relations of those regions to regions other than the seed region.

Another approach is to use independent component analysis (ICA) to create spatio-temporal component maps; the components are then sorted into those that carry information of interest and those that are caused by noise. Wig et al. again warn that inferring functional brain-region communities is difficult under ICA. ICA also has the issue of imposing orthogonality on the data.
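As a rough illustration of the ICA approach (not an actual fMRI pipeline), the sketch below unmixes two synthetic temporal sources from three "voxel" signals using scikit-learn's FastICA; deciding which component counts as signal remains the researcher's call:

```python
# A hedged sketch of the ICA step: decompose mixed signals into independent
# components, then sort components into signal vs. noise. Data are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 500)
s1 = np.sin(2 * t)                        # a "network" fluctuation
s2 = np.sign(np.sin(3 * t))               # e.g., a scanner/motion artifact
sources = np.c_[s1, s2] + 0.1 * rng.standard_normal((500, 2))

mixing = np.array([[1.0, 0.5], [0.5, 1.0], [1.5, 2.0]])
observed = sources @ mixing.T             # 500 time points x 3 "voxels"

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(observed)  # recovered temporal components
# The researcher must still decide which component is signal and which is
# noise; as Wig et al. caution, ICA itself does not label them.
```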

Graph theory uses a matrix to characterize covariance between regions, which is then transformed into a network map. The problem with graph-theory analysis is that the network map is heavily influenced by the a priori definition of brain regions and connectivity (nodes and edges). This places the researcher at risk of cherry-picking regions and connections according to their own preconceived theories. However, graph-theory analysis is still considered extremely valuable, as it is the only method that gives pair-wise relationships between nodes.
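The node-and-edge sensitivity is easy to demonstrate: the same correlation matrix yields different "networks" under different thresholds. All numbers below are hypothetical:

```python
# A minimal sketch of the graph-theoretic step: threshold a correlation matrix
# into an adjacency matrix. Note how the result depends on a priori choices
# (which nodes, what threshold) -- the risk described in the text.
import numpy as np

corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])       # hypothetical region-by-region correlations

for threshold in (0.25, 0.5):
    adjacency = (corr > threshold) & ~np.eye(3, dtype=bool)
    print(f"threshold={threshold}:\n{adjacency.astype(int)}")
# At 0.25 the graph has edges A-B and B-C; at 0.5 only A-B survives.
# The inferred "network" changes with the researcher's threshold choice.
```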

While ICA may have an advantage in being a fairly principled method, it seems that using both methods will be important to better understanding the network connectivity of the brain. Mumford et al. hoped to avoid these issues and use a principled approach that could determine pair-wise relationships using a statistical technique adopted from analysis of gene co-expression networks.

Dissociation in cognitive neuropsychology

Cognitive neuropsychology studies brain damaged patients and uses the patterns of selective impairment in order to make inferences on the underlying cognitive structure. Dissociation between cognitive functions is taken to be evidence that these functions are independent. Theorists have identified several key assumptions that are needed to justify these inferences:

  1. Functional modularity – the mind is organized into functionally separate cognitive modules.
  2. Anatomical modularity – the brain is organized into functionally separate modules. This assumption is very similar to the assumption of functional localization. It differs from the assumption of functional modularity because it is possible to have separable cognitive modules that are implemented by diffuse patterns of brain activation.
  3. Universality – The basic organization of functional and anatomical modularity is the same for all normal humans. This assumption is needed if we are to make any claim about functional organization based on dissociation that extrapolates from the instance of a case study to the population.
  4. Transparency / Subtractivity – the mind does not undergo substantial reorganization following brain damage. It is possible to remove one functional module without significantly altering the overall structure of the system. This assumption is necessary in order to justify using brain damaged patients in order to make inferences about the cognitive architecture of healthy people.

There are three principal types of evidence in cognitive neuropsychology: association, single dissociation and double dissociation. Association inferences observe that certain deficits are likely to co-occur. For example, there are many patients who have deficits in both abstract and concrete word comprehension following brain damage. Association studies are considered the weakest form of evidence, because the results could be accounted for by damage to neighboring brain regions rather than damage to a single cognitive system. Single dissociation inferences observe that one cognitive faculty can be spared while another is damaged following brain damage. This pattern indicates that (a) the two tasks employ different cognitive systems, (b) the two tasks use the same system and the damaged task is downstream from the spared task, or (c) the spared task requires fewer cognitive resources than the damaged task.

The "gold standard" for cognitive neuropsychology is the double dissociation. Double dissociation occurs when brain damage impairs task A in patient 1 but spares task B, and brain damage spares task A in patient 2 but impairs task B. It is assumed that one instance of double dissociation is sufficient to infer separate cognitive modules in the performance of the tasks.

Many theorists criticize cognitive neuropsychology for its dependence on double dissociations. In one widely cited study, Juola and Plunkett used a connectionist model to demonstrate that double-dissociation behavioral patterns can occur through random lesions of a single module. They created a multilayer connectionist system trained to pronounce words, repeatedly simulated random destruction of nodes and connections in the system, and plotted the resulting performance on a scatter plot. The results showed deficits in irregular word pronunciation with spared regular word pronunciation in some cases, and the reverse pattern in others. These results suggest that a single instance of double dissociation is insufficient to justify inference to multiple systems.
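The logic of the random-lesion argument can be reproduced with a toy model (not Juola and Plunkett's actual network; the architecture and scoring rule below are assumptions for illustration):

```python
# A toy re-creation of the random-lesion argument: lesion a SINGLE module many
# times and watch double-dissociation-like patterns appear by chance.
import numpy as np

rng = np.random.default_rng(42)
weights = np.abs(rng.standard_normal((20, 2)))   # one layer of 20 units, two "tasks"

def scores(mask):
    kept = (weights * mask[:, None]).sum(axis=0)
    return kept / weights.sum(axis=0)            # fraction of baseline performance

a_impaired = b_impaired = 0
for _ in range(2000):
    mask = (rng.random(20) > 0.5).astype(float)  # random lesion of ~half the units
    a, b = scores(mask)
    if a < b - 0.25:
        a_impaired += 1                          # looks like "A damaged, B spared"
    elif b < a - 0.25:
        b_impaired += 1                          # looks like "B damaged, A spared"

print("A-impaired pattern:", a_impaired, "B-impaired pattern:", b_impaired)
# Both halves of a "double dissociation" arise across simulated patients from
# one undifferentiated module -- Juola and Plunkett's point.
```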

Charter offers a theoretical case in which double dissociation logic can be faulty. If two tasks, task A and task B, use almost all of the same systems but differ by one mutually exclusive module apiece, then the selective lesioning of those two modules would seem to indicate that A and B use entirely different systems. Charter uses the example of someone who is allergic to peanuts but not shrimp and someone who is allergic to shrimp but not peanuts: double dissociation logic leads one to infer that peanuts and shrimp are digested by different systems. John Dunn offers another objection to double dissociation. He claims that it is easy to demonstrate the existence of a true deficit but difficult to show that another function is truly spared: demonstrating sparing amounts to accepting a null hypothesis, and while accumulating data may drive the estimated effect toward zero, an effect too small to have been detected can never be ruled out. Therefore, it is impossible to be fully confident that a given double dissociation actually exists.

On a different note, Alfonso Caramazza has given a principled reason for rejecting the use of group studies in cognitive neuropsychology. Studies of brain-damaged patients can take the form of a single case study, in which an individual's behavior is characterized and used as evidence, or of a group study, in which a group of patients displaying the same deficit have their behavior characterized and averaged. In order to justify grouping a set of patient data together, the researcher must know that the group is homogeneous, that their behavior is equivalent in every theoretically meaningful way. In brain-damaged patients, this can only be established a posteriori by analyzing the behavior patterns of all the individuals in the group. Thus, according to Caramazza, any group study is either the equivalent of a set of single case studies or is theoretically unjustified. Newcombe and Marshall pointed out that there are some cases in which grouping seems defensible (they use Geschwind's syndrome as an example), and that group studies might still serve as a useful heuristic in cognitive neuropsychological studies.

Single-unit recordings

It is commonly understood in neuroscience that information is encoded in the brain by the firing patterns of neurons. Many of the philosophical questions surrounding the neural code are related to questions about representation and computation that are discussed below. There are also methodological questions, including whether neurons represent information through an average firing rate or whether information is carried in the temporal dynamics of firing, and whether neurons represent information individually or as a population.

Computational neuroscience

Many of the philosophical controversies surrounding computational neuroscience involve the role of simulation and modeling as explanation. Carl Craver has been especially vocal about such interpretations. Jones and Love wrote an especially critical article targeted at Bayesian behavioral modeling that did not constrain the modeling parameters by psychological or neurological considerations. Eric Winsberg has written about the role of computer modeling and simulation in science generally, but his characterization is applicable to computational neuroscience.

Computation and representation in the brain

The computational theory of mind has been widespread in neuroscience since the cognitive revolution in the 1960s. This section will begin with a historical overview of computational neuroscience and then discuss various competing theories and controversies within the field.

Historical overview

Computational neuroscience began in the 1930s and 1940s with two groups of researchers. The first group consisted of Alan Turing, Alonzo Church and John von Neumann, who were working to develop computing machines and the mathematical underpinnings of computer science. This work culminated in the theoretical development of so-called Turing machines and the Church–Turing thesis, which formalized the mathematics underlying computability theory. The second group consisted of Warren McCulloch and Walter Pitts, who were working to develop the first artificial neural networks. McCulloch and Pitts were the first to hypothesize that neurons could be used to implement a logical calculus that could explain cognition. They used their toy neurons to develop logic gates that could make computations (see the sketch at the end of this overview).

However, these developments failed to take hold in the psychological sciences and neuroscience until the mid-1950s and 1960s. Behaviorism had dominated psychology until the 1950s, when new developments in a variety of fields overturned behaviorist theory in favor of a cognitive theory. From the beginning of the cognitive revolution, computational theory played a major role in theoretical developments. Minsky and McCarthy's work in artificial intelligence, Newell and Simon's computer simulations, and Noam Chomsky's importation of information theory into linguistics were all heavily reliant on computational assumptions. By the early 1960s, Hilary Putnam was arguing in favor of machine functionalism, in which the brain instantiated Turing machines. By this point computational theories were firmly fixed in psychology and neuroscience.

By the mid-1980s, a group of researchers began using multilayer feed-forward analog neural networks that could be trained to perform a variety of tasks. The work of researchers like Sejnowski, Rosenberg, Rumelhart, and McClelland was labeled connectionism, and the discipline has continued since then. The connectionist approach was embraced by Paul and Patricia Churchland, who then developed their "state space semantics" using concepts from connectionist theory. Connectionism was also condemned by researchers such as Fodor, Pylyshyn, and Pinker. The tension between the connectionists and the classicists is still being debated today.
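For concreteness, here is a minimal sketch of the kind of threshold unit McCulloch and Pitts proposed, wired as logic gates (the particular weights and thresholds are one conventional choice, not theirs specifically):

```python
# A minimal sketch of McCulloch and Pitts' idea: a threshold "neuron" whose
# binary inputs and weights implement logic gates.
def mcp_neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda x, y: mcp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mcp_neuron([x, y], [1, 1], 1)
NOT = lambda x:    mcp_neuron([x], [-1], 0)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "AND:", AND(x, y), "OR:", OR(x, y))
# Since {AND, OR, NOT} is functionally complete, networks of such units can
# compute any Boolean function -- the sense in which neurons "do logic".
```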

Representation

One of the reasons that computational theories are appealing is that computers have the ability to manipulate representations to give meaningful output. Digital computers use strings of 1s and 0s to represent content. Most cognitive scientists posit that the brain uses some form of representational code carried in the firing patterns of neurons. Computational accounts seem to offer an easy way of explaining how human brains carry and manipulate the perceptions, thoughts, feelings, and actions of individuals. While most theorists maintain that representation is an important part of cognition, the exact nature of that representation is highly debated. The two main positions come from advocates of symbolic representations and advocates of associationist representations.

Symbolic representational accounts have been famously championed by Fodor and Pinker. Symbolic representation means that objects are represented by symbols and are processed through rule-governed manipulations that are sensitive to their constitutive structure. The fact that symbolic representation is sensitive to the structure of representations is a major part of its appeal. Fodor proposed the language of thought hypothesis, in which mental representations are manipulated in the same way that language is syntactically manipulated in order to produce thought. According to Fodor, the language of thought hypothesis explains the systematicity and productivity seen in both language and thought.

Associationist representations are most often described in terms of connectionist systems. In connectionist systems, representations are distributed across all the nodes and connection weights of the system, and thus are said to be sub-symbolic. It is worth noting that a connectionist system is capable of implementing a symbolic system. There are several important aspects of neural nets that suggest that distributed parallel processing provides a better basis for cognitive functions than symbolic processing. Firstly, the inspiration for these systems came from the brain itself, indicating biological relevance. Secondly, these systems are capable of storing content-addressable memory, which is far more efficient than memory searches in symbolic systems. Thirdly, neural nets are resilient to damage, while even minor damage can disable a symbolic system. Lastly, soft constraints and generalization when processing novel stimuli allow nets to behave more flexibly than symbolic systems.

The Churchlands described representation in a connectionist system in terms of state space. The content of the system is represented by an n-dimensional vector, where n is the number of nodes in the system and the direction of the vector is determined by the activation pattern of the nodes. Fodor rejected this method of representation on the grounds that two different connectionist systems could not have the same content. Further mathematical analysis of connectionist systems revealed that systems containing similar content could be mapped graphically to reveal clusters of nodes that were important to representing that content; state-space vector comparison, however, was not amenable to this type of analysis. Recently, Nicholas Shea has offered his own account of content within connectionist systems that employs the concepts developed through cluster analysis.
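A toy version of the state-space picture (with invented activation values) shows how similarity of content becomes geometric proximity, the property that cluster analysis exploits:

```python
# A hedged sketch of "state space semantics": a network's content at a moment
# is a point in an n-dimensional activation space, and similar contents
# cluster together. Activation values are invented for illustration.
import numpy as np

# Activation vectors (n = 4 nodes) for three hypothetical inputs.
states = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.2, 0.1]),
    "chair": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(states["cat"], states["dog"]))    # high: same cluster
print(cosine(states["cat"], states["chair"]))  # low: different cluster
# Cluster analysis over many such vectors is the sort of structure Shea's
# account of connectionist content builds on.
```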

Views on computation

Computationalism, a kind of functionalist philosophy of mind, is committed to the position that the brain is some sort of computer. But what does it mean to be a computer? The definition of computation must be narrow enough to limit the number of objects that can be called computers: it would seem problematic, for example, to have a definition wide enough to allow stomachs and weather systems to count as computing. However, it is also necessary to have a definition broad enough to cover the wide variety of computational systems. For example, if the definition of computation is limited to syntactic manipulation of symbolic representations, then most connectionist systems would not count as computing. Rick Grush distinguishes between computation as a tool for simulation and computation as a theoretical stance in cognitive neuroscience. For the former, anything that can be computationally modeled counts as computing. In the latter case, the brain is a computing system, distinct in this regard from systems such as fluid dynamical systems and planetary orbits. The challenge for any computational definition is to keep the two senses distinct.

Alternatively, some theorists choose to accept a narrow or wide definition for theoretical reasons. Pancomputationalism is the position that everything can be said to compute. This view has been criticized by Piccinini on the grounds that such a definition makes computation trivial to the point where it is robbed of its explanatory value.

The simplest definition of computation is that a system can be said to be computing when a computational description can be mapped onto its physical description. This is an extremely broad definition, and it ends up endorsing a form of pancomputationalism. Putnam and Searle, who are often credited with this view, maintain that computation is observer-relative: if you want to view a system as computing, then you can say that it is computing. Piccinini points out that, on this view, not only is everything computing, but everything is computing in an indefinite number of ways, since it is possible to apply an indefinite number of computational descriptions to a given system.

The most common view of computation is the semantic account. Semantic approaches use a notion of computation similar to the mapping approaches, with the added constraint that the system must manipulate representations with semantic content. Note from the earlier discussion of representation that both the Churchlands' connectionist systems and Fodor's symbolic systems use this notion of computation. In fact, Fodor is famously credited as saying "No computation without representation". Computational states can be individuated by an externalist appeal to content in the broad sense (i.e. the object in the external world) or by an internalist appeal to content in the narrow sense (content defined by the properties of the system). In order to fix the content of the representation, it is often necessary to appeal to the information contained within the system.

Grush provides a criticism of the semantic account. He points out that the informational content of a system is not sufficient to demonstrate representation by the system. He uses his coffee cup as an example of a system that contains information, such as the heat conductance of the cup and the time since the coffee was poured, but is too mundane to compute in any robust sense. Semantic computationalists try to escape this criticism by appealing to the evolutionary history of the system; this is called the biosemantic account. Grush uses the example of his feet, saying that by this account his feet would not be computing the amount of food he had eaten, because their structure had not been evolutionarily selected for that purpose. Grush replies to the appeal to biosemantics with a thought experiment: imagine that lightning strikes a swamp somewhere and creates an exact copy of you. According to the biosemantic account, this swamp-you would be incapable of computation because there is no evolutionary history with which to justify assigning representational content. The idea that of two physically identical structures one can be said to be computing while the other is not should be disturbing to any physicalist.

There are also syntactic or structural accounts of computation. These accounts do not need to rely on representation, although it is possible to use both structure and representation as constraints on computational mapping. Oron Shagrir identifies several philosophers of neuroscience who espouse structural accounts. According to him, Fodor and Pylyshyn require some sort of syntactic constraint on their theory of computation. This is consistent with their rejection of connectionist systems on the grounds of systematicity. He also identifies Piccinini as a structuralist, quoting his 2008 paper: "the generation of output strings of digits from input strings of digits in accordance with a general rule that depends on the properties of the strings and (possibly) on the internal state of the system". Though Piccinini undoubtedly espouses structuralist views in that paper, he claims that mechanistic accounts of computation avoid reference to either syntax or representation. It is possible that Piccinini thinks there are differences between syntactic and structural accounts of computation that Shagrir does not respect.

In his view of mechanistic computation, Piccinini asserts that functional mechanisms process vehicles in a manner sensitive to the differences between different portions of the vehicle, and thus can be said to generically compute. He claims that these vehicles are medium-independent, meaning that the mapping function will be the same regardless of the physical implementation. Computing systems can be differentiated based upon the vehicle structure and the mechanistic perspective can account for errors in computation.

Dynamical systems theory presents itself as an alternative to computational explanations of cognition. These theories are staunchly anti-computational and anti-representational. Dynamical systems are defined as systems that change over time in accordance with a mathematical equation. Dynamical systems theory claims that human cognition is a dynamical system, in the same sense that computationalists claim that the human mind is a computer. A common objection leveled at dynamical systems theory is that dynamical systems are computable and therefore a subset of computationalism. Van Gelder is quick to point out that there is a big difference between being a computer and being computable: making the definition of computing wide enough to incorporate dynamical models would effectively embrace pancomputationalism.
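For a concrete sense of "dynamical system" in this debate, the sketch below integrates a damped oscillator with Euler's method; the equation and parameters are arbitrary illustrations, not a model of any cognitive process:

```python
# A minimal sketch of a dynamical system in the relevant sense: state evolving
# in time under a governing differential equation, with no symbols or
# representations anywhere in the description.
def simulate(x=1.0, v=0.0, k=1.0, damping=0.2, dt=0.01, steps=5000):
    trajectory = []
    for _ in range(steps):
        a = -k * x - damping * v      # acceleration from the governing equation
        v += a * dt                   # Euler update of velocity...
        x += v * dt                   # ...then of position
        trajectory.append(x)
    return trajectory

xs = simulate()
print(xs[0], xs[-1])  # the state decays toward an attractor at x = 0
# Dynamicists like Van Gelder claim cognition is best described by such
# equations, not by rules operating over representations.
```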

Explosion

From Wikipedia, the free encyclopedia
Explosion of unserviceable ammunition and other military items
The explosion of the Castle Bravo nuclear bomb.

An explosion is a rapid expansion in volume of a given amount of matter associated with an extreme outward release of energy, usually with the generation of high temperatures and the release of high-pressure gases. An explosion may also result from a slower expansion that would not normally be forceful but is confined: pressure builds as the matter inside tries to expand, and when the containment breaks, the matter expands forcefully. An example of this is a volcanic eruption created by the expansion of magma in a magma chamber as it rises to the surface. Supersonic explosions created by high explosives are known as detonations and travel through shock waves. Subsonic explosions are created by low explosives through a slower combustion process known as deflagration.

Causes

For an explosion to occur, there must be a rapid, forceful expansion of matter. There are numerous ways this can happen, both naturally and artificially, such as volcanic eruptions, or two objects striking each other at very high speed, as in an impact event. Explosive volcanic eruptions occur when magma rising from below has dissolved gas in it; the reduction of pressure as the magma rises causes the gas to bubble out of solution, resulting in a rapid increase in volume while the size of the magma chamber remains the same. This results in a pressure buildup that eventually leads to an explosive eruption. Explosions also occur beyond Earth, in events such as supernovae or, more commonly, stellar flares. Humans are also able to create explosions through the use of explosives, or through nuclear fission or fusion, as in a nuclear weapon. Explosions frequently occur during bushfires in eucalyptus forests, where the volatile oils in the tree tops suddenly combust.

Astronomical

The nebula M1-67 around Wolf–Rayet star WR 124 is the remnants of a stellar explosion, which is currently observed as six light years across

Among the largest known explosions in the universe are supernovae, which occur after the end of life of some types of stars. Solar flares are an example of common, much less energetic, explosions on the Sun, and presumably on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of large astronomical explosion occurs when a meteoroid or an asteroid impacts the surface of another object, or explodes in its atmosphere, such as a planet. This occurs because the two objects are moving at very high speed relative to each other (a minimum of 11.2 kilometres per second (7.0 mi/s) for an Earth impacting body). For example, the Tunguska event of 1908 is believed to have resulted from a meteor air burst.
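A back-of-the-envelope calculation shows why such impacts are explosive. The impactor mass below is an arbitrary assumption for illustration; only the 11.2 km/s minimum speed comes from the text:

```python
# A rough sketch of impact energy: at the minimum Earth-impact speed, even a
# modest body carries enormous kinetic energy. The mass is hypothetical.
mass = 1.0e4        # kg (hypothetical small impactor)
velocity = 11_200   # m/s (Earth escape velocity, the minimum impact speed)

kinetic_energy = 0.5 * mass * velocity ** 2
print(f"{kinetic_energy:.2e} J")                   # ~6.3e11 J
print(f"{kinetic_energy / 4.184e12:.2f} kt TNT")   # ~0.15 kilotons equivalent
```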

Black hole mergers, likely involving binary black hole systems, are capable of radiating many solar masses of energy into the universe in a fraction of a second, in the form of a gravitational wave. This is capable of transmitting ordinary energy and destructive forces to nearby objects, but in the vastness of space, nearby objects are rare. The gravitational wave observed on 21 May 2019, known as GW190521, produced a merger signal of about 100 ms duration, during which time it is estimated to have radiated away nine solar masses in the form of gravitational energy.
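The scale of that figure can be checked with E = mc²; the sketch below uses standard physical constants:

```python
# A back-of-the-envelope check of the GW190521 figure: nine solar masses
# radiated as gravitational energy, via E = m * c^2.
SOLAR_MASS = 1.989e30   # kg
C = 2.998e8             # m/s, speed of light

energy = 9 * SOLAR_MASS * C ** 2
print(f"{energy:.2e} J")                       # ~1.6e48 J
print(f"{energy / 0.1:.2e} W over ~100 ms")    # average power during the signal
```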

Chemical

The most common artificial explosives are chemical explosives, usually involving a rapid and violent oxidation reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be invented and put to use. Other notable early developments in chemical explosive technology were Frederick Augustus Abel's development of nitrocellulose in 1865 and Alfred Nobel's invention of dynamite in 1866. Chemical explosions (both intentional and accidental) are often initiated by an electric spark or flame in the presence of oxygen. Accidental explosions may occur in fuel tanks, rocket engines, etc.

Electrical and magnetic

A capacitor that has exploded

A high current electrical fault can create an "electrical explosion" by forming a high-energy electrical arc which rapidly vaporizes metal and insulation material. This arc flash hazard is a danger to people working on energized switchgear. Excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion.

Mechanical and vapor

The bursting of a sealed or partially sealed container under internal pressure, a strictly physical process as opposed to a chemical or nuclear one, is often referred to as an explosion. Examples include an overheated boiler or a simple tin can of beans tossed into a fire.

Boiling liquid expanding vapor explosions are one type of mechanical explosion that can occur when a vessel containing a pressurized liquid is ruptured, causing a rapid increase in volume as the liquid evaporates. The contents of the container may cause a subsequent chemical explosion, the effects of which can be dramatically more serious, as with a propane tank in the midst of a fire. In such a case, the effects of the mechanical explosion when the tank fails are compounded by the explosion of the released (initially liquid, then almost instantaneously gaseous) propane in the presence of an ignition source. For this reason, emergency workers often differentiate between the two events.

Nuclear

In addition to stellar nuclear explosions, a nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is significantly more powerful than the largest conventional explosives available, with a single weapon capable of completely destroying an entire city.

Properties

Force

A breaching charge exploding against a test door during training
The effects of a large explosion.

Explosive force is released in a direction perpendicular to the surface of the explosive. If a grenade explodes in mid air, the blast radiates in all directions. In contrast, in a shaped charge the explosive forces are focused to produce a greater local effect; shaped charges are often used by the military to breach doors or walls.

Velocity

The speed of the reaction is what distinguishes an explosive reaction from an ordinary combustion reaction. Unless the reaction occurs very rapidly, the thermally expanding gases will be moderately dissipated in the medium, with no large differential in pressure and no explosion. As a wood fire burns in a fireplace, for example, there certainly is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to build up a sudden substantial pressure differential and then cause an explosion. This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once.

Evolution of heat

The generation of heat in large quantities accompanies most explosive chemical reactions. The exceptions are called entropic explosives and include organic peroxides such as acetone peroxide. It is the rapid liberation of heat that causes the gaseous products of most explosive reactions to expand and generate high pressures. This rapid generation of high pressures of the released gas constitutes the explosion. The liberation of heat with insufficient rapidity will not cause an explosion. For example, although a unit mass of coal yields five times as much heat as a unit mass of nitroglycerin, the coal cannot be used as an explosive (except in the form of coal dust) because the rate at which it yields this heat is quite slow. In fact, a substance that burns less rapidly (i.e. slow combustion) may actually evolve more total heat than an explosive that detonates rapidly (i.e. fast combustion). In the former, slow combustion converts more of the internal energy (i.e. chemical potential) of the burning substance into heat released to the surroundings, while in the latter, fast combustion (i.e. detonation) converts more internal energy into work on the surroundings, with less internal energy converted into heat; cf. heat and work (thermodynamics), which are equivalent forms of energy. See heat of combustion for a more thorough treatment of this topic.
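The coal/nitroglycerin comparison is about power rather than total energy, which a rough calculation makes vivid. The heat values below are approximate and the release times are order-of-magnitude assumptions:

```python
# A rough numerical gloss on the coal/nitroglycerin comparison: total heat
# favors coal, but power (heat per unit time) favors the explosive.
coal_heat = 30e6    # J/kg, approximate heat of combustion of coal
nitro_heat = 6e6    # J/kg, approximate heat of explosion of nitroglycerin

coal_time = 600.0   # s: assume a kilogram of coal burns over ~10 minutes
nitro_time = 1e-5   # s: assume detonation is over in ~10 microseconds

print(f"coal:  {coal_heat / coal_time:.1e} W/kg")    # ~5e4 W/kg
print(f"nitro: {nitro_heat / nitro_time:.1e} W/kg")  # ~6e11 W/kg
# Roughly five times the heat, but about ten million times less power.
```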

When a chemical compound is formed from its constituents, heat may either be absorbed or released. The quantity of heat absorbed or given off during transformation is called the heat of formation. Heats of formation for solids and gases found in explosive reactions have been determined for a temperature of 25 °C and atmospheric pressure, and are normally given in units of kilojoules per gram-molecule. A positive value indicates that heat is absorbed during the formation of the compound from its elements; such a reaction is called an endothermic reaction. In explosive technology, only materials that are exothermic—that have a net liberation of heat and a negative heat of formation—are of interest. Reaction heat is measured under conditions of either constant pressure or constant volume. It is this heat of reaction that may properly be expressed as the "heat of explosion."
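As a worked illustration, the heat of reaction follows from heats of formation by Hess's law: products minus reactants. The explosive compound below is purely hypothetical; the product values are the standard heats of formation of CO2 and gaseous H2O:

```python
# A hedged worked example: heat of reaction = formation heat of products
# minus that of reactants (Hess's law). The decomposing compound is invented.
def heat_of_reaction(products, reactants):
    """Each argument: list of (heat_of_formation_kJ_per_mol, moles)."""
    total = lambda terms: sum(h * n for h, n in terms)
    return total(products) - total(reactants)

# Hypothetical explosive (assumed ΔHf = -50 kJ/mol) decomposing into stable gases.
dH = heat_of_reaction(
    products=[(-393.5, 3), (-241.8, 2)],   # 3 CO2 + 2 H2O (gas), standard values
    reactants=[(-50.0, 1)],
)
print(f"{dH:.1f} kJ per mole of explosive")  # negative: heat is released (exothermic)
```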

Initiation of reaction

A chemical explosive is a compound or mixture which, upon the application of heat or shock, decomposes or rearranges with extreme rapidity, yielding much gas and heat. Many substances not ordinarily classed as explosives may do one, or even two, of these things.

A reaction must be capable of being initiated by the application of shock, heat, or a catalyst (in the case of some explosive chemical reactions) to a small portion of the mass of the explosive material. A material in which the first three factors (rapid decomposition, evolution of gas, and evolution of heat) exist cannot be accepted as an explosive unless the reaction can be made to occur when needed.

Fragmentation

Fragmentation is the accumulation and projection of particles as the result of a high-explosive detonation. Fragments could originate from: parts of a structure (such as glass, bits of structural material, or roofing material), revealed strata and/or various surface-level geologic features (such as loose rocks, soil, or sand), the casing surrounding the explosive, and/or any other loose miscellaneous items not vaporized by the shock wave from the explosion. High-velocity, low-angle fragments can travel hundreds of metres with enough energy to initiate other surrounding high-explosive items, injure or kill personnel, and/or damage vehicles or structures.

Future History (Heinlein)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Future_History_(Heinlein)
Universe was a 1941 story from Heinlein's Future History series (shown here in the 1951 Dell edition).

The Future History is a series of stories created by Robert A. Heinlein. It describes a projected future of the human race from the middle of the 20th century through the early 23rd century. The term Future History was coined by John W. Campbell Jr. in the February 1941 issue of Astounding Science Fiction. Campbell published an early draft of Heinlein's chart of the series in the May 1941 issue.

Heinlein wrote most of the Future History stories early in his career, between 1939 and 1941 and between 1945 and 1950. Most of the Future History stories written prior to 1967 are collected in The Past Through Tomorrow, which also contains the final version of the chart. That collection does not include Universe and Common Sense; they were published separately as Orphans of the Sky.

Groff Conklin called the Future History "the greatest of all histories of tomorrow". It was nominated for the Hugo Award for Best All-Time Series in 1966, along with the Barsoom series by Edgar Rice Burroughs, the Lensman series by E. E. Smith, the Foundation series by Isaac Asimov, and The Lord of the Rings series by J. R. R. Tolkien, but lost to Asimov's Foundation series.

Definition

For the most part, The Past Through Tomorrow defines a core group of stories that are clearly within the Future History series. However, Heinlein scholars generally agree that some stories not included in the anthology belong to the Future History series, and that some that are included are only weakly linked to it.

James Gifford adds Time Enough for Love, which was published after The Past Through Tomorrow, and also "Let There Be Light", which was not included in The Past Through Tomorrow, possibly because the collection editor disliked it or because Heinlein himself considered it to be inferior. However, he considers Time Enough for Love to be a borderline case. He considers The Number of the Beast, The Cat Who Walks Through Walls, and To Sail Beyond the Sunset to be too weakly linked to the Future History to be included.

Bill Patterson includes To Sail Beyond the Sunset, on the theory that the discrepancies between it and the rest of the Future History are explained by assigning it to the same "bundle of related timelines" in the "World as Myth" multiverse. However, he lists a number of stories that he believes were never really intended to be part of Future History, even though they were included in The Past Through Tomorrow: "Life-Line" (which was written before Heinlein published the Future History chart; however, Lazarus Long does reference the protagonist of "Life-Line" and his device in Time Enough for Love), "The Menace from Earth", "—We Also Walk Dogs", and the stories originally published in the Saturday Evening Post ("Space Jockey", "It's Great to Be Back!", "The Green Hills of Earth", and "The Black Pits of Luna"). He agrees with Gifford that "Let There Be Light" should be included. The story "—And He Built a Crooked House—" was included only in the pre-war chart and never since.

The Heinlein juveniles do not hew closely to the Future History outline. Gifford states that "Although the twelve juvenile novels are not completely inconsistent with the Future History, neither do they form a thorough match with that series for adult readers. It is not often recognized that they are a reasonably consistent 'Future History' of their own... At least one major story specified in the Future History chart, the revolution on Venus, ended up being told in the framework of the juveniles as Between Planets." The novel Variable Star, written by Spider Robinson from Heinlein's detailed outline, incorporates some elements of both the Future History (such as references to Nehemiah Scudder) and the universe of the Heinlein juveniles (for example, torch ships and faster-than-light telepathic communication between twins). The adult short story "The Long Watch", included in Future History story collections, connects to Space Cadet through the character of (John) Ezra Dahlquist, the central character of the first, memorialized in the second.

Patterson cites "World as Myth" as a way of accounting for the deviation of real history from Heinlein's imagined future as well as inconsistencies between stories, writing, "Heinlein in the World as Myth books redefined the Future History as a timeline (or bundle of related timelines) ... which allows the 'Future History' to be a hard-edged term and yet nevertheless contain inconsistencies (i.e., any inconsistency belongs to a closely-related timeline)."

List of stories

Title | Original publishing date | Published in | Century | Timeline order | Collected in*
Life-Line | 1939-08 | Astounding Science-Fiction | 20th | 1 | The Man Who Sold the Moon (1950)
Let There Be Light | 1940-05 | Super Science Stories | 20th | 2 | The Man Who Sold the Moon (1950)†
The Roads Must Roll | 1940-06 | Astounding Science-Fiction | 20th | 3 | The Man Who Sold the Moon (1950)
Blowups Happen | 1940-09 | Astounding Science-Fiction | 20th | 4 | The Man Who Sold the Moon (1950)
The Man Who Sold the Moon | 1950 | The Man Who Sold the Moon (collection) | 20th | 5 | The Man Who Sold the Moon (1950)
Delilah and the Space-Rigger | 1949-12 | The Blue Book Magazine | 20th | 6 | The Green Hills of Earth (1951)
Space Jockey | 1947-04-26 | The Saturday Evening Post | 20th | 7 | The Green Hills of Earth (1951)
Requiem | 1940-01 | Astounding Science-Fiction | 20th | 8 | The Man Who Sold the Moon (1950)
The Long Watch | 1949-12 | The American Legion Magazine | 20th | 9 | The Green Hills of Earth (1951)
Gentlemen, Be Seated! | 1948-05 | Argosy | 20th | 10 | The Green Hills of Earth (1951)
The Black Pits of Luna | 1947-01-10 | The Saturday Evening Post | 20th | 11 | The Green Hills of Earth (1951)
It's Great to Be Back! | 1947-07-26 | The Saturday Evening Post | 20th | 12 | The Green Hills of Earth (1951)
"—We Also Walk Dogs" | 1941-07 | Astounding Science-Fiction | 20th | 13 | The Green Hills of Earth (1951)
Searchlight | 1962-08 | Scientific American | 20th | 14 |
Ordeal in Space | 1948-05 | Town & Country | 21st | 15 | The Green Hills of Earth (1951)
The Green Hills of Earth | 1947-02-08 | The Saturday Evening Post | 21st | 16 | The Green Hills of Earth (1951)
Logic of Empire | 1941-03 | Astounding Science-Fiction | 21st | 17 | The Green Hills of Earth (1951)
The Menace from Earth | 1957-08 | The Magazine of Fantasy and Science Fiction | 21st | 18 |
"If This Goes On—" | 1940-02 | Astounding Science-Fiction | 21st | 19 | Revolt in 2100 (1953)
Coventry | 1940-07 | Astounding Science-Fiction | 21st | 20 | Revolt in 2100 (1953)
Misfit | 1939-11 | Astounding Science-Fiction | 22nd | 21 | Revolt in 2100 (1953)
Methuselah's Children | 1941-07 | Astounding Science-Fiction | 22nd | 22 |
Universe | 1941-05 | Astounding Science-Fiction | 36th | 23 | Orphans of the Sky (1963)†
Common Sense | 1940-10 | Astounding Science-Fiction | 36th | 24 | Orphans of the Sky (1963)†
Time Enough for Love | 1973 | Standalone novel | 43rd | 25 |
To Sail Beyond the Sunset | 1987 | Standalone novel | 43rd | 26 |

*All stories are also collected in The Past Through Tomorrow (1967) unless marked †

Stories never written

The chart published in the collection Revolt in 2100 includes several unwritten stories, which Heinlein describes in a postscript. "Fire Down Below", about a revolution in Antarctica, would have been set in the early 21st century. Three more unwritten stories fill in the history from just before "Logic of Empire" in the early 21st century through the beginning of "If This Goes On—". "The Sound of His Wings" covers Nehemiah Scudder's early life as a television evangelist through his rise to power as the First Prophet. "Eclipse" describes independence movements on Mars and Venus. "The Stone Pillow" details the rise of the resistance movement from the early days of the theocracy through the beginning of "If This Goes On—".

These stories were key points in the Future History, so Heinlein gave a rough description of Nehemiah Scudder which made his reign easy to visualize—a combination of John Calvin, Girolamo Savonarola, Joseph Franklin Rutherford, and Huey Long. His rise to power began when one of his flock, the widow of a wealthy man who would have disapproved of Scudder, died and left him enough money to establish a television station. He then teamed up with an ex-Senator and hired a major advertising agency. He was soon famous even off-world—many bonded laborers on Venus saw him as a messianic figure. He had muscle as well—a re-creation of the Ku Klux Klan in everything but name. "Blood at the polls and blood in the streets, but Scudder won the election. The next election was never held." Though this period was integral to the human diaspora that would follow several hundred years later, Heinlein stated that he was never able to write these stories because they featured Scudder prominently; he "dislike(d) him too much".

Nehemiah Scudder already appears in Heinlein's earliest novel For Us, the Living: A Comedy of Customs (written 1938–1939, though first published in 2003). Scudder's early career as depicted in that book is virtually identical with the above—but with the crucial difference that in the earlier version Scudder is stopped at the last moment by the counter-mobilization of Libertarians, and despite mass voter intimidation carries only Tennessee and Alabama. In fact, the Libertarian regime seen in full bloom in that book's 2086 came into being in direct reaction to Scudder's attempt to impose puritanical mores on the entire American society.

Solipsism

From Wikipedia, the free encyclopedia

Varieties

There are varying degrees of solipsism that parallel the varying degrees of skepticism:

Metaphysical

Metaphysical solipsism is a variety of solipsism based on a philosophy of subjective idealism. Metaphysical solipsists maintain that the self is the only existing reality and that all other realities, including the external world and other persons, are representations of that self, having no independent existence. There are several versions of metaphysical solipsism, such as Caspar Hare's egocentric presentism (or perspectival realism), in which other people are conscious, but their experiences are simply not present.

Epistemological

Epistemological solipsism is the variety of idealism according to which only the directly accessible mental contents of the solipsistic philosopher can be known. The existence of an external world is regarded as an unresolvable question rather than actually false. Further, one cannot be certain to what extent the external world exists independently of one's mind. For instance, it may be that a God-like being controls the sensations received by the mind, making it appear as if there is an external world when most of it (excluding the God-like being and oneself) is illusory. However, the point remains that epistemological solipsists consider this an "unresolvable" question.

Methodological

Methodological solipsism is an agnostic variant of solipsism. It exists in opposition to the strict epistemological requirements for "knowledge" (e.g. the requirement that knowledge must be certain). It still entertains the point that any induction is fallible. Methodological solipsism sometimes goes even further, holding that even what we perceive as the brain is actually part of the external world, for it is only through our senses that we can see or feel it. Only the existence of thoughts is known for certain.

Methodological solipsists do not intend to conclude that the stronger forms of solipsism are actually true. They simply emphasize that justifications of an external world must be founded on indisputable facts about their own consciousness. The methodological solipsist believes that subjective impressions (empiricism) or innate knowledge (rationalism) are the sole possible or proper starting point for philosophical construction. Often methodological solipsism is not held as a belief system, but rather used as a thought experiment to assist skepticism (e.g. René Descartes' Cartesian skepticism).

Main points

Mere denial of material existence, in itself, does not necessarily constitute solipsism.

Philosophers generally try to build knowledge on more than an inference or analogy. Well-known frameworks such as Descartes' epistemological enterprise popularized the idea that all certain knowledge may go no further than "I think; therefore I exist." However, Descartes' view does not provide any details about the nature of the "I" that has been proven to exist.

The theory of solipsism also merits close examination because it relates to three widely held philosophical presuppositions, each itself fundamental and wide-ranging in importance:

  • One's most certain knowledge is the content of one's own mind—my thoughts, experiences, affects, etc.
  • There is no conceptual or logically necessary link between mental and physical—between, for example, the occurrence of certain conscious experience or mental states and the "possession" and behavioral dispositions of a "body" of a particular kind.
  • The experience of a given person is necessarily private to that person.

To expand on the second point, the conceptual problem is that the previous point assumes that mind or consciousness (which are attributes) can exist independently of some entity having those attributes, i.e., that an attribute of an existent can exist apart from the existent itself. If one admits the existence of an independent entity (e.g., the brain) having that attribute, the door is open to an independent reality. (See Brain in a vat.)

Some philosophers hold that, while it cannot be proven that anything independent of one's mind exists, the point that solipsism makes is irrelevant. This is because, whether the world as we perceive it exists independently or not, we cannot escape this perception, hence it is best to act assuming that the world is independent of our minds. (See Falsifiability and testability below)

History

Origins of solipsist thought are found in ancient Greece and later in early modern thinkers such as Thomas Hobbes and René Descartes.

Gorgias

Solipsism was first recorded by the Greek presocratic sophist Gorgias (c. 483–375 BC), who is quoted by the Roman sceptic Sextus Empiricus as having stated:

  • Nothing exists.
  • Even if something exists, nothing can be known about it.
  • Even if something could be known about it, knowledge about it cannot be communicated to others.

Much of the point of the sophists was to show that objective knowledge was a literal impossibility.

René Descartes

The foundations of solipsism are in turn the foundations of the view that the individual's understanding of any and all psychological concepts (thinking, willing, perceiving, etc.) is accomplished by making an analogy with their own mental states; i.e., by abstraction from inner experience. And this view, or some variant of it, has been influential in philosophy since René Descartes elevated the search for incontrovertible certainty to the status of the primary goal of epistemology, whilst also elevating epistemology to "first philosophy".

Berkeley

Portrait of George Berkeley by John Smybert, 1727

George Berkeley's arguments against materialism in favour of idealism provide the solipsist with a number of arguments not found in Descartes. While Descartes defends ontological dualism, thus accepting the existence of a material world (res extensa) as well as immaterial minds (res cogitans) and God, Berkeley denies the existence of matter but not minds, of which God is one.

Relation to other ideas

Idealism and materialism

One of the most fundamental debates in philosophy concerns the "true" nature of the world: whether it is some ethereal plane of ideas or a reality of atomic particles and energy. Materialism posits a real "world out there", as well as in and through us, that can be sensed: seen, heard, tasted, touched and felt, sometimes with prosthetic technologies corresponding to human sensing organs. (Materialists do not claim that human senses, or even their prosthetics, can, even taken collectively, sense the totality of the universe; simply that they collectively cannot sense what cannot in any way be known to us.) Materialists do not find the idealist picture a useful way of thinking about the ontology and ontogeny of ideas, but we might say that, from a materialist perspective pushed to a logical extreme communicable to an idealist, ideas are ultimately reducible to a physically communicated, organically, socially and environmentally embedded "brain state". While materialists do not hold that reflexive existence is experienced at the atomic level, the individual's physical and mental experiences are ultimately reducible to a unique tripartite combination of environmentally determined, genetically determined, and randomly determined interactions of firing neurons and atomic collisions.

For materialists, ideas have no primary reality as essences separate from our physical existence. From a materialist perspective, ideas are social (rather than purely biological), formed, transmitted, and modified through the interactions between social organisms and their social and physical environments. This materialist perspective informs scientific methodology, insofar as that methodology assumes that humans have no access to omniscience and that human knowledge is therefore an ongoing, collective enterprise, best produced via scientific and logical conventions adjusted to material human capacities and limitations.

Modern idealists believe that the mind and its thoughts are the only true things that exist. This is the reverse of what is sometimes called "classical idealism" or, somewhat confusingly, "Platonic idealism", owing to the influence of Plato's theory of forms (εἶδος eidos or ἰδέα idea), which were not products of our thinking. The material world is ephemeral, but a perfect triangle or "beauty" is eternal. Religious thinking tends to be some form of idealism, as God usually becomes the highest ideal (as in Neoplatonism). On this scale, solipsism can be classed as a form of idealism. Thoughts and concepts are all that exist, and furthermore, only the solipsist's own thoughts and consciousness exist. The so-called "reality" is nothing more than an idea that the solipsist has (perhaps unconsciously) created.

Cartesian dualism

There is another option: the belief that both ideals and "reality" exist. Dualists commonly argue that the distinction between the mind (or "ideas") and matter can be proven by employing Leibniz's principle of the identity of indiscernibles, which states that if two things share exactly the same qualities, then they must be identical, i.e., indistinguishable from each other and therefore one and the same thing. Dualists then attempt to identify attributes of mind that matter lacks (such as privacy or intentionality) or vice versa (such as having a certain temperature or electrical charge). One notable application of the identity of indiscernibles was by René Descartes in his Meditations on First Philosophy. Descartes concluded that he could not doubt the existence of himself (the famous cogito ergo sum argument), but that he could doubt the (separate) existence of his body. From this, he inferred that the person Descartes must not be identical to his body, since one possessed a characteristic that the other did not: namely, it could be known to exist. Solipsism agrees with Descartes in this respect, and goes further: only things that can be known to exist for certain should be considered to exist. The Descartes body could only exist as an idea in the mind of the person Descartes. Descartes and dualism aim to prove the actual existence of reality as opposed to a phantom existence (as well as the existence of God in Descartes' case), using the realm of ideas merely as a starting point, but solipsism usually finds those further arguments unconvincing. The solipsist instead proposes that their own unconscious is the author of all seemingly "external" events from "reality".
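To make the inference pattern explicit, here is a schematic rendering in standard logical notation (a sketch for illustration, not Descartes' own formalism). The direction of Leibniz's law the argument actually relies on is the indiscernibility of identicals, whose contrapositive licenses the dualist conclusion:

\[
x = y \;\rightarrow\; \forall F\,\bigl(F(x) \leftrightarrow F(y)\bigr)
\qquad \text{and, contrapositively,} \qquad
\exists F\,\bigl(F(x) \wedge \neg F(y)\bigr) \;\rightarrow\; x \neq y
\]

Taking x as the person Descartes, y as his body, and F as "can be known (by Descartes) to exist", the second schema yields x ≠ y. Whether such an epistemic predicate counts as a genuine property F is itself disputed, since substitution inside "known to..." contexts is not generally valid (the masked-man objection).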

Philosophy of Schopenhauer

The World as Will and Representation is the central work of Arthur Schopenhauer. Schopenhauer saw the human will as our one window to the world behind the representation, the Kantian thing-in-itself. He believed, therefore, that we could gain knowledge about the thing-in-itself, something Kant said was impossible, since the rest of the relationship between representation and thing-in-itself could be understood by analogy with the relationship between the human will and the human body.

Idealism

The idealist philosopher George Berkeley argued that physical objects do not exist independently of the mind that perceives them. An item truly exists only as long as it is observed; otherwise, it is not only meaningless but simply nonexistent. Berkeley does attempt to show that things can and do exist apart from the human mind and our perception, but only because there is an all-encompassing Mind in which all "ideas" are perceived; in other words, God, who observes all. Solipsism agrees that nothing exists outside of perception, but would argue that Berkeley falls prey to the egocentric predicament: he can only make his own observations, and thus cannot be truly sure that this God or other people exist to observe "reality". The solipsist would say it is better to disregard the unreliable observations of alleged other people and rely upon the immediate certainty of one's own perceptions.

Rationalism

Rationalism is the philosophical position that truth is best discovered by the use of reasoning and logic rather than by the use of the senses (see Plato's theory of forms). Solipsism is also skeptical of sense-data.

Philosophical zombie

The theory of solipsism overlaps with the theory of the philosophical zombie in that other seemingly conscious beings may actually lack true consciousness; instead, they merely display traits of consciousness to the observer, who may be the only conscious being there is.

Falsifiability and testability

Solipsism is not a falsifiable hypothesis as described by Karl Popper: there does not seem to be an imaginable disproof. By Popper's criterion, a hypothesis that cannot be falsified is not scientific; a solipsist, however, can still observe "the success of the sciences" (see also the no miracles argument). One critical test is nevertheless to consider the induction from experience that the externally observable world does not seem, at first approach, to be directly manipulable by mental energies alone. One can indirectly manipulate the world through the medium of the physical body, but it seems impossible to do so through pure thought (psychokinesis). It might be argued that if the external world were merely a construct of a single consciousness, i.e. the self, it should follow that the external world is somehow directly manipulable by that consciousness; if it is not, then solipsism is false. A counterargument holds that this reasoning is circular and incoherent: it begins by assuming that the external world is a "construct of a single consciousness", i.e., not real, and then demands that this supposedly unreal world be manipulated. The task is impossible by the argument's own lights, so its failure does not disprove solipsism. It is simply poor reasoning when only pure idealized logic is considered, which is why David Deutsch argues that when other scientific methods are used alongside logic, solipsism is "indefensible", appealing also to the criterion of the simplest explanation: "If, according to the simplest explanation, an entity is complex and autonomous, then that entity is real."
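As an informal schema (a sketch in standard notation, not Popper's own formalism), a hypothesis H is falsifiable just when some possible observation statement O would contradict it:

\[
\mathrm{Falsifiable}(H) \;\iff\; \exists O\,\bigl(O \text{ is a possible observation statement} \;\wedge\; (O \rightarrow \neg H)\bigr)
\]

Solipsism fails this test because any candidate O is itself an experience, and experiences are exactly what solipsism predicts; no observation statement entails its negation.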

The method of the typical scientist is naturalist: they first assume that the external world exists and can be known. But the scientific method, in the sense of a predict-observe-modify loop, does not require the assumption of an external world. A solipsist may perform a psychological test on themselves to discern the nature of the reality in their mind; however, Deutsch uses this fact to counterargue: the "outer parts" of the solipsist behave independently, and so are independent of the solipsist's "narrowly" defined (conscious) self. A solipsist's investigations may not be proper science, however, since they would not include the co-operative and communitarian aspects of scientific inquiry that normally serve to diminish bias.

Minimalism

Solipsism is a form of logical minimalism. Many people find the basic arguments of solipsism intuitively unconvincing as a case for the nonexistence of the external world, but a solid proof of that world's existence is not available at present. The central assertion of solipsism rests on the nonexistence of such a proof, and strong solipsism (as opposed to weak solipsism) asserts that no such proof can be made. In this sense, solipsism is logically related to agnosticism in religion: the distinction between believing you do not know, and believing you could not have known.

However, minimality (or parsimony) is not the only logical virtue. A common misapprehension of Occam's razor has it that the simpler theory is always the best. In fact, the principle is that the simpler of two theories of equal explanatory power is to be preferred. In other words: additional "entities" can pay their way with enhanced explanatory power. So the naturalist can claim that, while their world view is more complex, it is more satisfying as an explanation.
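Put schematically (an informal rendering; the comparisons of "explanatory power" E and "complexity" C are illustrative labels, not part of the original principle):

\[
E(T_1) = E(T_2) \;\wedge\; C(T_1) < C(T_2) \;\rightarrow\; \text{prefer } T_1
\]

The razor is silent when explanatory power differs, and that is precisely the case the naturalist claims obtains: the more complex world view pays for its extra entities with extra explanation.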

In infants

Some developmental psychologists believe that infants are solipsistic, and that eventually children infer that others have experiences much like theirs and reject solipsism.

Hinduism

The earliest reference to solipsism is found in Hindu philosophy, in the Brihadaranyaka Upanishad, dated to the early 1st millennium BC. The Upanishad holds the mind to be the only god, and all actions in the universe are thought to result from the mind assuming infinite forms. After the development of distinct schools of Indian philosophy, the Advaita Vedanta and Samkhya schools are thought to have originated concepts similar to solipsism.

Advaita Vedanta

Advaita is one of the six best-known Hindu philosophical systems and literally means "non-duality". Its first great consolidator was Adi Shankaracharya, who continued the work of some of the Upanishadic teachers and that of his teacher's teacher, Gaudapada. By using various arguments, such as the analysis of the three states of experience (wakefulness, dream, and deep sleep), he established the singular reality of Brahman, in which Brahman, the universe, and the Atman or the Self are one and the same.

One who sees everything as nothing but the Self, and the Self in everything one sees, such a seer withdraws from nothing. For the enlightened, all that exists is nothing but the Self, so how could any suffering or delusion continue for those who know this oneness?

— Ishopanishad: sloka 6, 7

The concept of the Self in the philosophy of Advaita could be interpreted as solipsism. However, the theological definition of the Self in Advaita protects it from true solipsism as found in the West. Similarly, the Vedantic text Yogavasistha escapes the charge of solipsism because the real "I" is thought to be nothing but the absolute whole looked at through a particular unique point of interest.

The Yoga Vasistha states: "...according to them [solipsists] this world is mental in nature. There is no reality other than the ideas of one's own mind. This view is incorrect, because the world cannot be the content of an individual's mind. If it were so, an individual would have created and destroyed the world according to his whims. This theory is called atma khyati – the pervasion of the little self (intellect)." (Yoga Vasistha, Nirvana Prakarana, Uttarardha, Volume 6, p. 107, trans. Swami Jyotirmayananda)

Samkhya and Yoga

Samkhya philosophy, which is sometimes seen as the basis of Yogic thought, adopts the view that matter exists independently of individual minds. The representation of an object in an individual mind is held to be a mental approximation of the object in the external world. Therefore, Samkhya chooses representational realism over epistemological solipsism. Having established this distinction between the external world and the mind, Samkhya posits the existence of two metaphysical realities: Prakriti (matter) and Purusha (consciousness).

Buddhism

Some interpretations of Buddhism assert that external reality is an illusion, and this position is sometimes misunderstood as metaphysical solipsism. Buddhist philosophy, though, generally holds that the mind and external phenomena are both equally transient, and that they arise from each other. The mind cannot exist without external phenomena, nor can external phenomena exist without the mind. This relation is known as "dependent arising" (pratityasamutpada).

The Buddha stated, "Within this fathom long body is the world, the origin of the world, the cessation of the world and the path leading to the cessation of the world". Whilst not rejecting the occurrence of external phenomena, the Buddha focused on the illusion created within the mind of the perceiver by the process of ascribing permanence to impermanent phenomena, satisfaction to unsatisfying experiences, and a sense of reality to things that were effectively insubstantial.

Mahayana Buddhism also challenges the idea that one can experience an 'objective' reality independent of individual perceiving minds.

From the standpoint of Prasangika (a branch of Madhyamaka thought), external objects do exist, but are devoid of any type of inherent identity: "Just as objects of mind do not exist [inherently], mind also does not exist [inherently]". In other words, even though a chair may physically exist, individuals can only experience it through the medium of their own mind, each with their own literal point of view. Therefore, an independent, purely 'objective' reality could never be experienced.

The Yogacara (sometimes translated as "Mind only") school of Buddhist philosophy contends that all human experience is constructed by mind. Some later representatives of one Yogacara subschool (Prajñakaragupta, Ratnakīrti) propounded a form of idealism that has been interpreted as solipsism. A view of this sort is contained in the 11th-century treatise of Ratnakirti, "Refutation of the existence of other minds" (Santanantara dusana), which provides a philosophical refutation of external mind-streams from the Buddhist standpoint of ultimate truth (as distinct from the perspective of everyday reality).

In addition, the Bardo Thodol, Tibet's famous Book of the Dead, repeatedly states that all of reality is a figment of one's perception, although this occurs within the "Bardo" realm (post-mortem). For instance, within the sixth part of the section titled "The Root Verses of the Six Bardos", there appears the following line: "May I recognize whatever appeareth as being mine own thought-forms"; there are many lines in a similar vein.

Criticism

Solipsism as radical subjective idealism has often been criticized by well-known philosophers: "solipsism can only succeed in a madhouse" (Arthur Schopenhauer); "solipsism is madness" (Martin Gardner).

Bertrand Russell wrote that solipsism was "psychologically impossible" to believe: "I once received a letter from an eminent logician, Mrs. Christine Ladd-Franklin, saying that she was a solipsist, and was surprised that there were no others. Coming from a logician and a solipsist, her surprise surprised me." He also argued that the logic of solipsism compels one to believe in a "solipsism of the moment", in which only the presently existing moment can be said to exist.

John Stuart Mill wrote that one can know of others' minds because "First, they have bodies like me, which I know, in my own case, to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and outward signs, which in my own case I know by experience to be caused by feelings".
