Sunday, December 10, 2023

Incompatibilism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Incompatibilism
Classical incompatibilists agreed that determinism leaves no room for free will.

The term incompatibilism was coined in the 1960s, most likely by philosopher Keith Lehrer, to name the view that the thesis of determinism is logically incompatible with the classical thesis of free will. The term compatibilism was coined (also by Lehrer) to name the view that the classical free will thesis is logically compatible with determinism, i.e. it is possible for an ordinary human to exercise free will (the freedom-relevant ability to do otherwise) even in a universe at which determinism is true. These terms were originally coined for use within a research paradigm that was dominant among academics during the so-called "classical period" from the 1960s to 1980s, or what has been called the "classical analytic paradigm". Within the classical analytic paradigm, the problem of free will and determinism was understood as a Compatibility Question: "Is it possible for an ordinary human to exercise free will (classically defined as an ability to do otherwise) when determinism is true?" Those working in the classical analytic paradigm who answered "no" were incompatibilists in the original, classical-analytic sense of the term, now commonly called classical incompatibilists; they proposed that determinism precludes free will because it precludes our ability to do otherwise. Those who answered "yes" were compatibilists in the original sense of the term, now commonly called classical compatibilists. Given that classical free will theorists (i.e. those working in the classical analytic paradigm) agreed that it is at least metaphysically possible for an ordinary human to exercise free will, all classical compatibilists accepted a compossibilist account of free will (i.e. a compossibilist interpretation of the ability to do otherwise) and all classical incompatibilists accepted a libertarian (a.k.a. libertarianist) account of free will (i.e. a libertarian/libertarianist interpretation of the ability to do otherwise).

The classical analytic paradigm has fallen out of favor over the last few decades, largely because philosophers no longer agree that free will is equivalent to some kind of ability to do otherwise; many hold that it is, instead, a type of sourcehood that does not require an ability to do otherwise. The number of philosophers who reject the classical assumption of anthropocentric possibilism, i.e. the view that it is at least metaphysically possible for a human to exercise free will, has also risen in recent years. As philosophers adjusted Lehrer's original (classical) definitions of the terms 'incompatibilism' and 'compatibilism' to reflect their own perspectives on the location of the purported "fundamental divide" among free will theorists, the terms 'incompatibilism' and 'compatibilism' have been given a variety of new meanings. At present, then, there is no standard meaning of the term 'incompatibilism' (or its complement 'compatibilism').

On one recent taxonomy, there are now at least three substantively different, non-classical uses of the term 'incompatibilism', or (if one prefers) three different types of incompatibilism, namely: neo-classical incompatibilism, post-classical incompatibilism (a.k.a. incompossibilism), and anti-classical incompatibilism; correspondingly, there are neo-classical, post-classical (compossibilist), and anti-classical versions of compatibilism as well. Neo-classical incompatibilism is a two-tenet view: (1) incompossibilism is true (i.e. it is metaphysically impossible for an ordinary human to act freely when determinism is true), and (2) determinism-related causal/nomological factors preclude free will (which explains why incompossibilism is true). Correspondingly, neo-classical compatibilism is the two-tenet view that (1) the negative, non-explanatory tenet of neo-classical incompatibilism is false (i.e. compossibilism is true), and (2) the positive, explanatory tenet of neo-classical incompatibilism is false. Anti-classical incompatibilism is the explanatory thesis of neo-classical incompatibilism; anti-classical incompatibilism is neutral on the truth-value of incompossibilism. Correspondingly, anti-classical compatibilism is the negation of neo-classical incompatibilism's positive tenet, i.e. anti-classical compatibilism is the contradictory of anti-classical incompatibilism. Post-classical incompatibilism is just the negative, non-explanatory thesis of neo-classical incompatibilism; this view is neutral on whether the positive, explanatory thesis of neo-classical incompatibilism is true. (Put another way, on the post-classical redefinition of 'incompatibilism', it is just an alternative name for incompossibilism, a view which is completely silent on whether determinism-related causal factors are relevant to free will or are a total red herring in discussions of free will.) Correspondingly, post-classical compatibilism is identical to compossibilism (i.e. on the post-classical redefinition of 'compatibilism', it denotes mere compossibilism).

The ambiguity of 'incompatibilism' can be a source of confusion because arguments with very different (even inconsistent) conclusions are currently lumped together under the umbrella phrase "arguments for incompatibilism." For example, it is easy for the casual reader to overlook that some arguments for post-classical incompatibilism (a.k.a. incompossibilism) are not arguments for neo-classical incompatibilism on the grounds that the argument does not aim to support the latter's explanatory tenet (a.k.a. anti-classical incompatibilism). Other arguments support post-classical incompatibilism (a.k.a. incompossibilism) but conclude that neo-classical incompatibilism is false on the grounds that its explanatory tenet (a.k.a. anti-classical incompatibilism) is false. Arguments in the last category conclude that people lack free will when determinism is true but not at all because determinism is true (i.e. not at all because certain causal/nomological factors obtain); most propose that the real threat to free will is that people lack adequate control over their own constitutive properties, or what is often called their "constitutive luck" (as opposed to causal luck).

Libertarianism

Free-will libertarianism is the view that the free-will thesis (that we, ordinary humans, have free will) is true and that determinism is false; in first-order language, it is the view that we (ordinary humans) have free will and the world does not behave in the way described by determinism. Libertarianism is one of the popular solutions to the problem of free will, roughly the problem of settling the question of whether we have free will and the logically prior question of what free will amounts to. The main rivals to libertarianism are soft determinism and hard determinism.

Libertarian Robert Kane (editor of the Oxford Handbook of Free Will) is a leading incompatibilist philosopher in favour of free will. Kane seeks to hold persons morally responsible for decisions that involved indeterminism in their process. Critics maintain that Kane fails to overcome the greatest challenge to such an endeavor: "the argument from luck". Namely, if a critical moral choice is a matter of luck (indeterminate quantum fluctuations), then on what grounds can we hold a person responsible for their final action? Moreover, even if we imagine that a person can make an act of will ahead of time, to make the moral action more probable in the upcoming critical moment, this act of 'willing' was itself a matter of luck. Kane objects to the validity of the argument from luck because it misrepresents chance as if it were external to the act of choosing. The free will theorem of John H. Conway and Simon B. Kochen further establishes that if we have free will, then quantum particles also possess free will. This means that, starting from the assumption that humans have free will, it is possible to pinpoint the origin of their free will in the quantum particles that constitute their brain.

Such a philosophical stance risks an infinite regress, however; if any such mind is real, an objection can still be raised that free will would be impossible if the choosing is shaped merely by luck or chance.

Libertarianism in the philosophy of mind is unrelated to the like-named political philosophy. It suggests that we actually do have free will, that it is incompatible with determinism, and that therefore the future is not determined. For example, at this moment, one could either continue reading this article if one wanted, or cease. Under this assertion, being that one could do either, the fact of how the history of the world will continue to unfold is not currently determined one way or the other.

One famous proponent of this view was Lucretius, who asserted that free will arises out of the random, chaotic movements of atoms, called "clinamen". One major objection to this view is that science has gradually shown that more and more of the physical world obeys completely deterministic laws, and seems to suggest that our minds are just as much part of the physical world as anything else. If these assumptions are correct, incompatibilist libertarianism can only be maintained as the claim that free will is a supernatural phenomenon which does not obey the laws of nature (as, for instance, maintained by some religious traditions).

However, many libertarian viewpoints now rely upon an indeterministic view of the physical universe, under the assumption that the idea of a deterministic, clockwork universe has become outdated since the advent of quantum mechanics. By assuming an indeterministic universe, libertarian philosophical constructs can be proposed under the assumption of physicalism.

There are libertarian viewpoints based upon indeterminism and physicalism, which is closely related to naturalism. A major problem for naturalistic libertarianism is to explain how indeterminism can be compatible with rationality and with appropriate connections between an individual's beliefs, desires, general character and actions. A variety of naturalistic libertarianism is promoted by Robert Kane, who emphasizes that if our character is formed indeterministically (in "self-forming actions"), then our actions can still flow from our character, and yet still be incompatibilistically free.

Alternatively, libertarian viewpoints based upon indeterminism have been proposed without the assumption of naturalism. At the time C. S. Lewis wrote Miracles, quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but still Lewis stated the logical possibility that, if the physical world were proved to be indeterministic, this would provide an entry (interaction) point into the traditionally viewed closed system, where a scientifically described physically probable/improbable event could be philosophically described as an action of a non-physical entity on physical reality (noting that, under a physicalist point of view, the non-physical entity must be independent of the self-identity or mental processing of the sentient being). Lewis mentions this only in passing, making clear that his thesis does not depend on it in any way.

Others may use some form of Donald Davidson's anomalous monism to suggest that although the mind is in fact part of the physical world, it involves a different level of description of the same facts, so that although there are deterministic laws under the physical description, there are no such laws under the mental description, and thus our actions are free and not determined.

Hard determinism

Schopenhauer said, "Man is free to do what he wills, but he cannot will what he wills." The hard determinist says that obviously, then, there is no 'free will'.

Those who reject free will and accept determinism are variously known as "hard determinists", hard incompatibilists, free will skeptics, illusionists, or impossibilists. They believe that there is no 'free will' and that any sense of the contrary is an illusion. Of course, hard determinists do not deny that one has desires, but say that these desires are causally determined by an unbroken chain of prior occurrences. According to this philosophy, no wholly random, spontaneous, mysterious, or miraculous events occur. Determinists sometimes assert that it is stubborn to resist scientifically motivated determinism on purely intuitive grounds about one's own sense of freedom. They reason that the history of the development of science suggests that determinism is the logical method in which reality works.

William James said that philosophers (and scientists) have an "antipathy to chance." Absolute chance, a possible implication of quantum mechanics and the indeterminacy principle, supports the existence of indefinite causal structures.

Moral implications

Since many believe that free will is necessary for moral responsibility, hard determinism may imply disastrous consequences for their theory of ethics, resulting in a domino theory of moral nonresponsibility.

As something of a solution to this predicament, one might embrace the so-called "illusion" of free will. This thesis argues in favor of maintaining the prevailing belief in free will for the sake of preserving moral responsibility and the concept of ethics. However, critics argue that this move renders morality merely another "illusion", or else that this move is simply hypocritical.

The determinist will add that, even if denying free will does mean morality is incoherent, such an unfortunate result has no effect on the truth. Note, however, that hard determinists often have some sort of 'moral system' that relies explicitly on determinism. A determinist's moral system simply bears in mind that every person's actions in a given situation are, in theory, predicted by the interplay of environment and upbringing. For instance, the determinist may still punish undesirable behaviours for reasons of behaviour modification or deterrence.

Hard incompatibilism

Hard incompatibilism, like hard determinism, is a type of skepticism about free will. 'Hard incompatibilism' is a term coined by Derk Pereboom to designate the view that both determinism and indeterminism are incompatible with having free will and moral responsibility. Like the hard determinist, the hard incompatibilist holds that if determinism were true, our having free will would be ruled out. But Pereboom argues in addition that if our decisions were indeterministic events, free will would also be precluded. In his view, free will is the control in action required for the desert aspect of moral responsibility—for our deserving to be blamed or punished for immoral actions, and to be praised or rewarded for morally exemplary actions. He contends that if our decisions were indeterministic events, their occurrence would not be in the control of the agent in the way required for such attributions of desert. The possibility for free will that remains is libertarian agent causation, according to which agents as substances (thus not merely as having a role in events) can cause actions without being causally determined to do so. Pereboom argues that for empirical reasons it is unlikely that we are agent causes of this sort, and that as a result, it is likely that we lack free will.

Experimental research

In recent years researchers in the field of experimental philosophy have been working on determining whether ordinary people, who are not experts in this field, naturally have compatibilist or incompatibilist intuitions about determinism and moral responsibility. Some experimental work has even conducted cross-cultural studies. The debate about whether people naturally have compatibilist or incompatibilist intuitions has not come out overwhelmingly in favor of one view or the other. Still, there has been some evidence that people can naturally hold both views. For instance, when people are presented with abstract cases which ask if a person could be morally responsible for an immoral act when they could not have done otherwise, people tend to say no, or give incompatibilist answers, but when presented with a specific immoral act that a specific person committed, people tend to say that that person is morally responsible for their actions, even if they were determined (that is, people also give compatibilist answers).

Algorithmic efficiency

From Wikipedia, the free encyclopedia

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.

For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important.

For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort sorts the list in time proportional to the number of elements squared (O(n^2), see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice.
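
This trade-off is easy to reproduce. The following is a minimal Python sketch (assuming CPython, whose built-in sorted() is an implementation of Timsort) contrasting a naive bubble sort, which needs only constant extra memory, with sorted(), which allocates a new list:

    import random

    def bubble_sort(items):
        """Sort a list in place: O(n^2) comparisons, O(1) extra memory."""
        n = len(items)
        for i in range(n):
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    data = [random.randint(0, 1000) for _ in range(2000)]

    slow_but_frugal = bubble_sort(data.copy())  # O(n^2) time, O(1) extra space
    fast_but_roomy = sorted(data)               # O(n log n) time, O(n) extra space
    assert slow_but_frugal == fast_but_roomy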

Background

The importance of efficiency with respect to time was emphasised by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine:

"In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation"

Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was then to use the fastest algorithm that could fit in the available memory.

Modern computers are significantly faster than the early computers, and have a much larger amount of memory available (Gigabytes instead of Kilobytes). Nevertheless, Donald Knuth emphasised that efficiency is still an important consideration:

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"

Overview

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago.

Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.

There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order.

In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues.

Theoretical analysis

In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where f(n) = O(g(n)) roughly means the time requirement for an algorithm is proportional to g(n), omitting lower-order terms that contribute less than g(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however, either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of the lengths encountered in most data-intensive programs.

Some examples of Big O notation applied to algorithms' asymptotic time complexity include:

Notation      Name           Examples
O(1)          constant       Finding the median from a sorted list of measurements; using a constant-size lookup table; using a suitable hash function for looking up an item.
O(log n)      logarithmic    Finding an item in a sorted array with a binary search or a balanced search tree, as well as all operations in a Binomial heap.
O(n)          linear         Finding an item in an unsorted list or a malformed tree (worst case) or in an unsorted array; adding two n-bit integers by ripple carry.
O(n log n)    linearithmic, loglinear, or quasilinear    Performing a Fast Fourier transform; heapsort, quicksort (best and average case), or merge sort.
O(n^2)        quadratic      Multiplying two n-digit numbers by a simple algorithm; bubble sort (worst case or naive implementation), Shell sort, quicksort (worst case), selection sort or insertion sort.
O(c^n), c>1   exponential    Finding the optimal (non-approximate) solution to the travelling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute-force search.
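
Evaluating these growth functions at a single concrete input size shows how quickly the classes in the table above diverge; a small illustrative sketch in Python (operation counts, not measured running times):

    import math

    n = 1_000_000
    for name, count in [
        ("constant", 1),
        ("logarithmic", math.log2(n)),
        ("linear", n),
        ("linearithmic", n * math.log2(n)),
        ("quadratic", n ** 2),
    ]:
        print(f"{name:12s} {count:20,.0f}")
    # At n = 1,000,000 the quadratic count is roughly 50,000 times the
    # linearithmic count; 2**n (exponential) would have over 300,000 digits.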

Benchmarking: measuring performance

For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed.

Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.

Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example.
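
As an illustration, a minimal "do it yourself" benchmark can be built with Python's standard timeit module; the two workloads compared here are arbitrary choices for demonstration, not taken from the roundup mentioned above:

    import timeit

    def build_with_loop():
        # Build a list of squares with an explicit loop.
        result = []
        for i in range(10_000):
            result.append(i * i)
        return result

    def build_with_comprehension():
        # Build the same list with a list comprehension.
        return [i * i for i in range(10_000)]

    # repeat() re-runs each measurement; the minimum is the value
    # least disturbed by other processes running on the machine.
    loop_t = min(timeit.repeat(build_with_loop, number=100, repeat=5))
    comp_t = min(timeit.repeat(build_with_comprehension, number=100, repeat=5))
    print(f"loop: {loop_t:.3f}s  comprehension: {comp_t:.3f}s")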

Implementation concerns

Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded, or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler. See the articles on just-in-time compilation and interpreted languages.

There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls.

Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they could be easily reconfigured. As parallel and distributed computing grow in importance in the late 2010s, more investments are being made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI.

Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have a great amount of knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations.

Measures of resource usage

Measures are normally expressed as a function of the size of the input n.

The two most common measures are:

  • Time: how long does the algorithm take to complete?
  • Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage).

For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are:

  • Direct power consumption: power needed directly to operate the computer.
  • Indirect power consumption: power needed for cooling, lighting, etc.

As of 2018, power consumption is growing in importance as a metric for computational tasks of all types and at all scales, ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing.

Less common measures of computational efficiency may also be relevant in some cases:

  • Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted. Displaying a picture or image (e.g. Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks.
  • External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference.
  • Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event.
  • Total cost of ownership: particularly if a computer is dedicated to one particular algorithm.

Time

Theory

Analyze the algorithm, typically using time complexity analysis to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Algorithms which include parallel processing may be more difficult to analyze.

Practice

Use a benchmark to time the use of an algorithm. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests.
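
A minimal sketch in Python: time.process_time() reports CPU time, time.perf_counter() reports elapsed (wall-clock) time, and the results are averaged over several runs. The workload here is an arbitrary stand-in for the algorithm under test:

    import time

    def workload():
        # Stand-in for the algorithm being measured.
        return sum(i * i for i in range(1_000_000))

    RUNS = 5
    cpu = wall = 0.0
    for _ in range(RUNS):
        c0, w0 = time.process_time(), time.perf_counter()
        workload()
        cpu += time.process_time() - c0
        wall += time.perf_counter() - w0
    print(f"average CPU time:     {cpu / RUNS:.4f} s")
    print(f"average elapsed time: {wall / RUNS:.4f} s")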

Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment.

This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions.

Space

This section is concerned with the use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As for the time analysis above, analyze the algorithm, typically using space complexity analysis, to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation.

There are up to four aspects of memory usage to consider:

  • The amount of memory needed to hold the code for the algorithm.
  • The amount of memory needed for the input data.
  • The amount of memory needed for any output data.
    • Some algorithms, such as sorting, often rearrange the input data and do not need any additional space for output data. This property is referred to as "in-place" operation.
  • The amount of memory needed as working space during the calculation.
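
The last of these aspects, working space, can be observed directly. A sketch using Python's tracemalloc module contrasts an in-place sort with one that allocates a full copy of its input (note that CPython's list.sort still allocates some temporary merge workspace, so its peak is small but not zero):

    import random
    import tracemalloc

    data = [random.random() for _ in range(100_000)]

    tracemalloc.start()
    data.sort()                # in place: no separate output list
    _, peak_in_place = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    tracemalloc.start()
    result = sorted(data)      # allocates a second list for the output
    _, peak_copy = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"in-place sort peak: {peak_in_place:,} bytes")
    print(f"copying sort peak:  {peak_copy:,} bytes")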

Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it is typical for personal computers to have between 4 and 32 GB of RAM, an increase of over 300 million times as much memory.

Caching and memory hierarchy

Current computers can have relatively large amounts of memory (possibly Gigabytes), so having to squeeze an algorithm into a confined amount of memory is much less of a problem than it used to be. But the presence of four different categories of memory can be significant:

  • Processor registers, the fastest of computer memory technologies with the least amount of storage space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture.
  • Cache memory is the second fastest and second smallest memory available in the memory hierarchy. Caches are present in CPUs, GPUs, hard disk drives and external peripherals, and are typically implemented in static RAM. Memory caches are multi-leveled; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. This operates at speeds comparable with the CPU or GPU's arithmetic logic unit or floating-point unit (about 2-10 times slower) if the data is in the L1 cache. It is about 10 times slower if there is an L1 cache miss and the data must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present.
  • Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10-100 times slower. As of 2018, RAM is increasingly implemented on-chip of processors, as CPU or GPU memory.
  • Virtual memory is most often implemented in terms of secondary storage such as a hard disk, and is an extension to the memory hierarchy that has much larger storage space but much larger latency, typically around 1000 times slower than a cache miss for a value in RAM. While originally motivated to create the impression of higher amounts of memory being available than were truly available, virtual memory is more important in contemporary usage for its time-space tradeoff and enabling the usage of virtual machines. Cache misses from main memory are called page faults, and incur huge performance penalties on programs.

An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to virtual memory. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another.

In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much memory, but at the cost of performance. If an algorithm and its data will fit in cache memory, then very high speed can be obtained; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well.
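
Locality effects are most visible in low-level languages, but even in Python the traversal order of a two-dimensional structure matters. The sketch below is only suggestive, since interpreter overhead (the extra indexing work in the column-major loop) contributes alongside cache behaviour:

    import time

    N = 2_000
    matrix = [[1] * N for _ in range(N)]

    t0 = time.perf_counter()
    total = sum(sum(row) for row in matrix)        # row-major: sequential access
    row_major = time.perf_counter() - t0

    t0 = time.perf_counter()
    total = sum(matrix[i][j] for j in range(N) for i in range(N))  # column-major
    col_major = time.perf_counter() - t0

    print(f"row-major:    {row_major:.3f} s")
    print(f"column-major: {col_major:.3f} s")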

Criticism of the current state of programming

Computer scientist David May has observed that "software efficiency halves every 18 months, compensating Moore's Law".

May goes on to state:

In ubiquitous systems, halving the instructions executed can double the battery life and big data sets bring big opportunities for better software and algorithms: Reducing the number of operations from N × N to N × log(N) has a dramatic effect when N is large ... for N = 30 billion, this change is as good as 50 years of technology improvements.
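
A back-of-envelope check of the scale May describes, as a Python sketch (a base-2 logarithm is assumed here, since the quotation does not specify one):

    import math

    N = 30_000_000_000                      # 30 billion
    speedup = (N * N) / (N * math.log2(N))  # ratio of N*N to N*log(N) operations
    print(f"{speedup:.2e}")                 # ~8.6e+08: a roughly billion-fold reduction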

  • Software author Adam N. Rosenburg in his blog "The failure of the Digital computer", has described the current state of programming as nearing the "Software event horizon", (alluding to the fictitious "shoe event horizon" described by Douglas Adams in his Hitchhiker's Guide to the Galaxy book). He estimates there has been a 70 dB factor loss of productivity or "99.99999 percent, of its ability to deliver the goods", since the 1980s—"When Arthur C. Clarke compared the reality of computing in 2001 to the computer HAL 9000 in his book 2001: A Space Odyssey, he pointed out how wonderfully small and powerful computers were but how disappointing computer programming had become".

Nonsense mutation

From Wikipedia, the free encyclopedia

In genetics, a nonsense mutation is a point mutation in a sequence of DNA that results in a nonsense codon, or a premature stop codon in the transcribed mRNA, and leads to a truncated, incomplete, and possibly nonfunctional protein product. A nonsense mutation is not always harmful; the functional effect of a nonsense mutation depends on many aspects, such as the location of the stop codon within the coding DNA. For example, the effect of a nonsense mutation depends on the proximity of the nonsense mutation to the original stop codon, and the degree to which functional subdomains of the protein are affected. Because nonsense mutations lead to premature termination of polypeptide chains, they are also called chain termination mutations.

Missense mutations differ from nonsense mutations in that they are point mutations in which a single nucleotide change causes substitution of a different amino acid. A nonsense mutation also differs from a nonstop mutation, which is a point mutation that removes a stop codon. Nonsense mutations are involved in about 10% of cases of genetic disease. Some of the diseases that these mutations can cause are Duchenne muscular dystrophy (DMD), cystic fibrosis (CF), spinal muscular atrophy (SMA), cancers, metabolic diseases, and neurologic disorders. The rate of nonsense mutations varies from gene to gene and from tissue to tissue, but gene silencing occurs in every patient with a nonsense mutation.

Simple example

    DNA: 5' - ATG ACT CAC CGA GCG CGA AGC TGA - 3'
         3' - TAC TGA GTG GCT CGC GCT TCG ACT - 5'
   mRNA: 5' - AUG ACU CAC CGA GCG CGA AGC UGA - 3'
Protein:      Met Thr His Arg Ala Arg Ser Stop

The example above begins with a 5' DNA sequence of eight codons, with its complementary strand shown below it. The next row shows the 5' mRNA strand, which is generated through transcription. Lastly, the final row shows the amino acids that are translated from each respective codon, with the eighth and final codon representing the stop codon. The codons corresponding to the fourth amino acid, arginine, are the ones that will undergo a nonsense mutation in the continuation of this example below.

    DNA: 5' - ATG ACT CAC TGA GCG CGA AGC TGA - 3'
         3' - TAC TGA GTG ACT CGC GCT TCG ACT - 5'
   mRNA: 5' - AUG ACU CAC UGA GCG CGA AGC UGA - 3'
Protein:      Met Thr His Stop

Now, suppose that a nonsense mutation is introduced at the fourth codon in the 5' DNA sequence (CGA), causing the cytosine to be replaced with thymine, yielding TGA in the 5' DNA sequence and ACT in the complementary strand. Because ACT is transcribed as UGA, it is translated as a stop codon, so the remaining codons of the mRNA are not translated into protein; the stop codon is reached prematurely during translation. This can yield a truncated (i.e., abbreviated) protein product, which quite often lacks the functionality of the normal, non-mutant protein.
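
The truncation can be reproduced in a few lines of Python; the codon table below is deliberately minimal, covering only the codons that appear in this example:

    # Minimal codon table restricted to the codons used in this example.
    CODONS = {
        "AUG": "Met", "ACU": "Thr", "CAC": "His", "CGA": "Arg",
        "GCG": "Ala", "AGC": "Ser", "UGA": "Stop",
    }

    def translate(mrna):
        """Translate an mRNA string codon by codon, halting at a stop codon."""
        protein = []
        for i in range(0, len(mrna), 3):
            amino = CODONS[mrna[i:i + 3]]
            if amino == "Stop":
                break
            protein.append(amino)
        return protein

    normal = "AUGACUCACCGAGCGCGAAGCUGA"
    mutant = "AUGACUCACUGAGCGCGAAGCUGA"   # fourth codon CGA -> UGA
    print(translate(normal))   # ['Met', 'Thr', 'His', 'Arg', 'Ala', 'Arg', 'Ser']
    print(translate(mutant))   # ['Met', 'Thr', 'His'] -- truncated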

Possible outcomes

Deleterious

Deleterious outcomes represent the majority of nonsense mutations and are the most common outcome observed naturally. Deleterious nonsense mutations decrease the overall fitness and reproductive success of the organism. For example, a nonsense mutation occurring in a gene encoding a protein can cause structural or functional defects in the protein that disrupt cellular biology. Depending on the significance of the functions of this protein, this disruption could be detrimental to the fitness and survival of the organism.

Neutral

When a nonsense mutation is neutral, it neither benefits nor harms the organism; its effects go unnoticed. In other words, the mutation does not positively or negatively affect the organism, and because such effects go unnoticed, few papers describe these mutations. An example of this type of nonsense mutation is one that occurs directly before the original stop codon for a given protein. Because the mutation occurs so close to the end of the protein chain, the impact of the change may not be significant, suggesting that the mutated amino acid did not have a large impact on the overall structure or function of the protein or on the organism as a whole. This scenario is rare, but possible.

Beneficial

Beneficial nonsense mutations are considered the rarest of the possible nonsense mutation outcomes. Beneficial nonsense mutations increase the overall fitness and reproductive success of an organism, the opposite of the effects of a deleterious mutation. Because a nonsense mutation introduces a premature stop codon within a sequence of DNA, it is extremely unlikely for this scenario to benefit the organism. One example would be a nonsense mutation affecting a dysfunctional protein that releases toxins: the premature stop codon prevents the dysfunctional protein from carrying out its function at full strength, so less toxin is released and the fitness of the organism improves. Such situations occur far less frequently than deleterious outcomes.

Suppressing nonsense mutations

Pictured on the left is a diagram of normal translation occurring without mutation; blue circles are peptides already translated, while grey circles are peptides yet to be translated. In the center is a diagram of a nonsense mutation, where the UUG codon is mutated to the stop codon UAG; the stop codon recruits a release factor, terminating translation. On the right is a diagram of the tRNA suppression mechanism, where the codon and the tRNA are both mutated, resulting in tRNA suppression: the mutated Tyr tRNA has the anticodon AUC, which recognizes the UAG stop codon, so protein translation continues.

Nonsense-mediated mRNA decay

Despite an expected tendency for premature termination codons to yield shortened polypeptide products, in fact the formation of truncated proteins does not occur often in vivo. Many organisms—including humans and lower species, such as yeast—employ a nonsense-mediated mRNA decay pathway, which degrades mRNAs containing nonsense mutations before they are able to be translated into nonfunctional polypeptides.

tRNA Suppression

Because nonsense mutations result in altered mRNA with a premature stop codon, one way of suppressing the damage done to the final protein's function is to alter the tRNA that reads the mRNA. These tRNAs are termed suppressor tRNAs. If the stop codon is UAG, the tRNA for any other amino acid could be altered from its original anticodon to AUC so that it recognizes the UAG codon instead. This will result in the protein not being truncated, but it may still carry an altered amino acid. These suppressor tRNA mutations are only possible if the cell has more than one tRNA that reads a particular codon; otherwise the mutation would kill the cell. The only stop codons are UAG, UAA, and UGA. UAG and UAA suppressors read their respective stop codons instead of their original codon, but UAA suppressors also read UAG due to wobble base pairing. UGA suppressors are very rare. Another hurdle for this technique is that stop codons are also recognized by release factors, so the tRNA must still compete with the release factors to keep translation going. Because of this, suppression is usually only 10-40% successful. Suppressor tRNAs also read stop codons that are not mutations, causing some proteins to be much longer than they should be. Only bacteria and lower eukaryotes can survive with these mutations; mammalian and insect cells die as a result of a suppressor mutation.

Common disease-associated nonsense mutations

Selection of notable mutations, ordered in a standard table of the genetic code of amino acids; nonsense mutations are marked by red arrows.

Nonsense mutations comprise around 20% of single nucleotide substitutions within protein coding sequences that result in human disease. Nonsense mutation-mediated pathology is often attributed to reduced amounts of full-length protein, because only 5-25% of transcripts possessing nonsense mutations do not undergo nonsense-mediated decay (NMD). Translation of the remaining nonsense-bearing mRNA may generate abbreviated protein variants with toxic effects.

Twenty-three different single-point nucleotide substitutions are capable of converting a non-stop codon into a stop codon, with the mutations CGA→TGA and CAG→TAG being the most common disease-related substitutions characterized in the Human Gene Mutation Database (HGMD). As a result of different substitution frequencies for each nucleotide, the proportions of the three stop codons generated by disease-inducing nonsense mutations differ from the stop codon distributions in non-diseased gene variants. Notably, the codon TAG is overrepresented, while the TGA and TAA codons are underrepresented, in disease-related nonsense mutations.
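
The count of twenty-three can be verified by enumeration: each of the three stop codons has nine single-nucleotide neighbours (three positions times three alternative bases), minus those neighbours that are themselves stop codons. A Python sketch:

    STOPS = {"TAA", "TAG", "TGA"}
    BASES = "ACGT"

    substitutions = set()
    for stop in STOPS:
        for pos in range(3):
            for base in BASES:
                if base == stop[pos]:
                    continue  # not a substitution
                source = stop[:pos] + base + stop[pos + 1:]
                if source not in STOPS:  # must start from a non-stop codon
                    substitutions.add((source, stop))
    print(len(substitutions))  # 23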

Translation termination efficiency is influenced by the specific stop codon sequence on the mRNA, with the UAA sequence yielding the highest termination efficiency. Sequences surrounding the stop codon also impact termination efficiency. Consequently, the underlying pathology of diseases caused by nonsense mutations ultimately depends on the identity of the mutated gene and the specific location of the mutation.

Examples of diseases induced by nonsense mutations include Duchenne muscular dystrophy (DMD), cystic fibrosis (CF), and spinal muscular atrophy (SMA), noted above.

Nonsense mutations in other genes may also drive dysfunction of several tissue or organ systems:

SMAD8

SMAD8 is the eighth homolog of the ENDOGLIN gene family and is involved in TGF-β/BMP signaling. Novel nonsense mutations in SMAD8 have been identified in association with pulmonary arterial hypertension. The pulmonary system relies on SMAD1, SMAD5, and SMAD8 to regulate pulmonary vascular function. Downregulation and loss of the signals normally mediated by SMAD8 contribute to pathogenesis in pulmonary arterial hypertension. The ALK1 gene, part of the TGF-β signaling family, was found to be mutated, while also down-regulating the SMAD8 gene, in patients with pulmonary arterial hypertension. SMAD8 mutants were not phosphorylated by ALK1, disrupting the interactions with SMAD4 that would normally allow signaling in wild-type organisms.

LGR4

LGR4 binds R-spondins to activate the Wnt signaling pathway. Wnt signaling regulates bone mass and osteoblast differentiation and is important for the development of bone, heart, and muscle. An LGR4 nonsense mutation in an otherwise healthy population has been linked to low bone mass density and symptoms of osteoporosis. Studies of LGR4-mutant mice showed that the observed low bone mass is not due to age-related bone loss. Mutations in LGR4 have been associated with family lineages with medical histories of rare bone disorders. Mice lacking LGR4 also displayed delayed osteoblast differentiation during development, showcasing the important role of LGR4 in bone mass regulation and development.

Therapeutics targeting nonsense mutation diseases

Therapeutics for diseases caused by nonsense mutations attempt to recapitulate wild-type function by decreasing the efficacy of NMD, facilitating readthrough of the premature stop codon during translation, or editing the genomic nonsense mutation.

Antisense oligonucleotides to suppress the expression of NMD and translation termination proteins are being explored in animal models of nonsense mutation-induced disease. Other RNA therapeutics under investigation include synthetic suppressor tRNAs that enable ribosomes to insert an amino acid, instead of initiating chain termination, upon encountering premature stop codons.

CRISPR-Cas9 based single nucleotide substitutions have been used to generate amino acid codons from stop codons, achieving an editing success rate of 10% in cell cultures.

Read-through has been achieved using small molecule drugs such as aminoglycosides and negamycin. An oxadiazole, Ataluren (previously PTC124), facilitates the selective read-through of aberrant stop codons, rendering it a potential therapeutic against nonsense mutation-induced disease. Ataluren, sold under the tradename Translarna, is currently an approved treatment for Duchenne muscular dystrophy in the European Economic Area and Brazil. However, phase III trials of Ataluren as a cystic fibrosis therapeutic have failed to meet their primary endpoints.

Quantum computing

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Quantum_computing   ...