
Tuesday, November 8, 2022

Data compression

From Wikipedia, the free encyclopedia
 
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding; encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction or line coding, the means for mapping data onto a signal.

Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression and decompression processes. Data compression is subject to a space–time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.

Lossless

Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.
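
As a rough illustration of the idea (not any particular format's scheme), a minimal run-length encoder and decoder might look like the following sketch; the 279-pixel run is just the example from the paragraph above.

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((count, current))
            current, count = p, 1
    runs.append((count, current))
    return runs

def rle_decode(runs):
    """Expand (count, value) pairs back into the original sequence."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

data = ["red"] * 279 + ["blue"] * 3
encoded = rle_encode(data)          # [(279, 'red'), (3, 'blue')]
assert rle_decode(encoded) == data  # lossless: the original is recovered exactly
```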

The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Grammar-based codes can compress highly repetitive input extremely effectively, for instance a biological data collection of the same or closely related species, a huge versioned document collection, internet archives, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Practical grammar compression algorithms include Sequitur and Re-Pair.
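
The table-building behaviour described above can be sketched with a toy LZW coder. This is a minimal, unoptimized illustration: codes are left as plain integers rather than being bit-packed or further entropy coded.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Dictionary-based compression: emit codes for the longest strings seen so far."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code     # grow the table dynamically from earlier input
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Rebuild the same table on the fly from the code stream alone."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]  # code not yet in table
        out.extend(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return bytes(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decompress(lzw_compress(sample)) == sample
```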

The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding.
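
A minimal sketch of the interval-narrowing idea behind arithmetic coding is shown below. It assumes a fixed three-symbol model and uses exact fractions for clarity; practical coders use integer arithmetic with rescaling, bit output, and adaptive, context-dependent models.

```python
from fractions import Fraction

# Fixed symbol probabilities (an assumed toy model, not an adaptive one).
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
cum, running = {}, Fraction(0)
for sym, p in probs.items():
    cum[sym] = (running, running + p)
    running += p

def encode(message):
    """Narrow [low, high) once per symbol; any point in the final interval encodes the message."""
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        width = high - low
        c_lo, c_hi = cum[sym]
        low, high = low + width * c_lo, low + width * c_hi
    return (low + high) / 2          # one representative point inside the interval

def decode(value, length):
    """Replay the narrowing, choosing the symbol whose slot contains the value."""
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        width = high - low
        offset = (value - low) / width
        for sym, (c_lo, c_hi) in cum.items():
            if c_lo <= offset < c_hi:
                out.append(sym)
                low, high = low + width * c_lo, low + width * c_hi
                break
    return "".join(out)

msg = "abacab"
assert decode(encode(msg), len(msg)) == msg
```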

Archive software typically has the ability to adjust the "dictionary size", where a larger size demands more random access memory during compression and decompression but achieves stronger compression, especially on repeating patterns in files' content.

Lossy

In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video.

Most forms of lossy compression are based on transform coding, especially the discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis).

Lossy image compression is used in digital cameras to increase storage capacity. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats; lossy compression is used extensively in video.

In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony, for example, while audio compression is used for CD ripping and is decoded by audio players.

Lossy compression can cause generation loss.

Theory

The theoretical basis for compression is provided by information theory and, more specifically, algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference.

Machine learning

There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".

An alternative view holds that compression algorithms implicitly map strings into implicit feature-space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) an associated vector space ℵ is defined, such that C(.) maps an input string x to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; analyses therefore typically examine three representative lossless compression methods: LZW, LZ77, and PPM.
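
One widely used compression-based similarity measure of this kind is the normalized compression distance (NCD). The sketch below uses zlib (a DEFLATE implementation) as the stand-in compressor; any of the methods named above could be substituted.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: similar strings compress well together."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog" * 20
s2 = b"the quick brown fox leaps over the lazy cat" * 20
s3 = bytes(range(256)) * 4

print(ncd(s1, s2))  # small: s1 and s2 share most of their structure
print(ncd(s1, s3))  # larger: unrelated content adds little mutual redundancy
```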

According to AIXI theory, a connection more directly explained in Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you can't unzip it without both, but there may be an even smaller combined form.

Data differencing

Data compression can be viewed as a special case of data differencing. Data differencing consists of producing a difference given a source and a target, with patching reproducing the target given a source and a difference. Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data.

The term differential compression is used to emphasize the data differencing connection.
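
The correspondence can be illustrated with zlib's preset-dictionary feature: compressing a target with the source as a preset dictionary acts as a crude differencer, while compressing against nothing is ordinary compression. This is only a sketch of the idea, not a production delta format such as VCDIFF.

```python
import random
import zlib

random.seed(0)
source = bytes(random.randrange(256) for _ in range(4000))          # an "old" version of a file
target = source[:2000] + b"A SMALL INSERTED EDIT" + source[2000:]   # a lightly edited "new" version

def diff(source: bytes, target: bytes) -> bytes:
    """Produce a 'difference': DEFLATE the target using the source as a preset dictionary."""
    comp = zlib.compressobj(zdict=source)
    return comp.compress(target) + comp.flush()

def patch(source: bytes, delta: bytes) -> bytes:
    """Reproduce the target from the source plus the difference."""
    decomp = zlib.decompressobj(zdict=source)
    return decomp.decompress(delta) + decomp.flush()

delta = diff(source, target)
assert patch(source, delta) == target

plain = zlib.compress(target)   # ordinary compression ~ differencing against an empty source
print(len(target), len(plain), len(delta))   # the delta is far smaller than the other two
```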

Uses

Image

Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.

An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos.

Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format.

Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.

Audio

Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, quantization, discrete cosine transform and linear prediction to reduce the amount of information used to represent the uncompressed data.

Lossy audio compression algorithms provide higher compression and are used in numerous audio applications including Vorbis and MP3. These algorithms almost all rely on psychoacoustics to eliminate or reduce fidelity of less audible sounds, thereby reducing the space required to store or transmit them.

The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB.
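
A back-of-the-envelope check of those figures, using assumed rather than exact parameters (a nominal 640 MB disc, 16-bit stereo PCM at 44.1 kHz, and 192 kbit/s taken as a "medium" MP3 rate):

```python
# Rough arithmetic only; real discs and encoders vary.
CD_BYTES = 640 * 1024 * 1024            # nominal 640 MB disc
PCM_RATE = 44_100 * 16 * 2              # bits per second of uncompressed CD audio
MP3_RATE = 192_000                      # an assumed "medium" MP3 bit rate, bits per second

def hours(total_bytes: int, bits_per_second: int) -> float:
    return total_bytes * 8 / bits_per_second / 3600

print(f"uncompressed: {hours(CD_BYTES, PCM_RATE):.1f} h")   # roughly 1.1 h
print(f"MP3 @192kbps: {hours(CD_BYTES, MP3_RATE):.1f} h")   # roughly 7.8 h
```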

Lossless audio compression produces a representation of digital data that can be decoded to an exact digital duplicate of the original. Compressed files are typically around 50–60% of the original size, similar to the ratios achieved by generic lossless data compression. Lossless codecs use curve fitting or linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and the actual signal are coded separately.
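
A toy version of the predict-and-code-the-residual idea, loosely modelled on a fixed second-order predictor of the kind FLAC offers (the real format adds adaptive predictors and Rice coding of the residual); NumPy is assumed available and zlib stands in for the residual coder. The decoder would keep the first two samples and add the residuals back to reconstruct the signal exactly.

```python
import zlib
import numpy as np

# Toy signal: a slowly varying tone plus a little noise, quantized to 16-bit samples.
rng = np.random.default_rng(0)
t = np.arange(8000)
signal = (1000 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 3, t.size)).astype(np.int16)

# Second-order fixed predictor: estimate x[n] ~ 2*x[n-1] - x[n-2],
# then keep only the prediction error (the residual).
x = signal.astype(np.int32)
residual = x[2:] - (2 * x[1:-1] - x[:-2])

raw = len(zlib.compress(signal.tobytes()))
res = len(zlib.compress(residual.astype(np.int16).tobytes()))
print(raw, res)   # the small-magnitude residual typically compresses noticeably better
```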

A number of lossless audio compression formats exist. See list of lossless codecs for a listing. Some formats are associated with a distinct system, such as Direct Stream Transfer, used in Super Audio CD and Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD.

Some audio file formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream.

When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies.

Lossy audio compression

Comparison of spectrograms of audio in an uncompressed format and several lossy formats. The lossy spectrograms show bandlimiting of higher frequencies, a common technique associated with lossy audio compression.

Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based on psychoacoustic optimizations.

Psychoacoustics recognizes that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all.

Due to the nature of lossy algorithms, audio quality suffers a digital generation loss when a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats such as MP3 are very popular with end-users as the file size is reduced to 5-20% of the original size and a megabyte can store about a minute's worth of music at adequate quality.

Coding methods

To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weigh the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
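
The pipeline can be caricatured in a few lines: transform a frame to the frequency domain, discard components far below the strongest one, and transform back. This sketch assumes NumPy and SciPy are available, uses the ordinary DCT as a stand-in for the MDCT, and a single global threshold as a stand-in for a real masking model, so it only shows the principle.

```python
import numpy as np
from scipy.fft import dct, idct

# A 1024-sample frame: a loud low-frequency tone next to a quiet high-frequency tone.
n = np.arange(1024)
frame = np.sin(2 * np.pi * 30 * n / 1024) + 0.01 * np.sin(2 * np.pi * 400 * n / 1024)

# Transform to the frequency domain (ordinary DCT used here instead of the MDCT).
coeffs = dct(frame, norm="ortho")

# Crude "psychoacoustic" step: drop components far below the strongest one.
# A real coder would use frequency-dependent masking thresholds, not one global cutoff.
threshold = 0.02 * np.max(np.abs(coeffs))
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

reconstructed = idct(kept, norm="ortho")
print(np.count_nonzero(kept), "of", coeffs.size, "coefficients kept")
print("max error:", np.max(np.abs(reconstructed - frame)))
```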

Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound.

Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications.

Latency is introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called a frame, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality.

In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms.
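
The latency contributed by frame buffering is simply the frame length divided by the sample rate; the frame sizes below are illustrative ballpark figures, not exact codec parameters.

```python
def frame_latency_ms(frame_samples: int, sample_rate_hz: int) -> float:
    """Minimum algorithmic delay contributed by buffering one analysis frame."""
    return 1000 * frame_samples / sample_rate_hz

print(frame_latency_ms(1, 8_000))       # sample-by-sample quantizer: ~0.1 ms
print(frame_latency_ms(160, 8_000))     # a 20 ms speech frame at 8 kHz
print(frame_latency_ms(1024, 44_100))   # a ~23 ms transform frame at 44.1 kHz
```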

Speech encoding

Speech encoding is an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate.

This is accomplished, in general, by some combination of two approaches:

  • Only encoding sounds that could be made by a single human voice.
  • Throwing away more of the data in the signal—keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.

The earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the μ-law algorithm.
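
The μ-law algorithm is a companding scheme: amplitudes are passed through a logarithmic curve before uniform quantization, so quiet sounds keep proportionally more resolution. The sketch below shows the continuous μ-law curve and its inverse; the actual ITU-T G.711 codec uses a segmented 8-bit approximation of this curve rather than floating-point math.

```python
import numpy as np

MU = 255  # mu-law parameter used in North American and Japanese telephony

def mu_law_encode(x):
    """Compress amplitudes in [-1, 1] so that small signals keep more resolution."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_decode(y):
    """Inverse companding: expand back to (approximately) the original amplitude."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

samples = np.linspace(-1, 1, 9)
quantized = np.round(mu_law_encode(samples) * 127) / 127        # 8-bit-style uniform quantizer
print(np.max(np.abs(mu_law_decode(quantized) - samples)))       # small reconstruction error
```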

History

Solidyne 922: The world's first commercial audio bit compression sound card for PC, 1990

Early audio research was conducted at Bell Labs. There, in 1950, C. Chapin Cutler filed the patent on differential pulse-code modulation (DPCM). In 1973, Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan.

Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC.

Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3, Dolby Digital, and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986.

The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. Twenty years later, almost all the radio stations in the world were using similar technology manufactured by a number of companies.

A literature compendium for a large variety of audio coding systems was published in the IEEE's Journal on Selected Areas in Communications (JSAC), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual techniques and some kind of frequency analysis and back-end noiseless coding.

Video

Uncompressed video requires a very high data rate. Although lossless video compression codecs perform at a compression factor of 5 to 12, a typical H.264 lossy compression video has a compression factor between 20 and 200.
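
Some rough arithmetic behind those figures, with assumed parameters (1080p, 30 frames per second, 24 bits per pixel):

```python
# Back-of-the-envelope data rates; real sources and encoders vary.
width, height, fps, bits_per_pixel = 1920, 1080, 30, 24
uncompressed_bps = width * height * fps * bits_per_pixel
print(uncompressed_bps / 1e6, "Mbit/s uncompressed")          # about 1493 Mbit/s

for factor in (5, 12, 20, 200):                               # lossless vs H.264-style lossy
    print(f"1:{factor} -> {uncompressed_bps / factor / 1e6:.0f} Mbit/s")
```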

The two key video compression techniques used in video coding standards are the discrete cosine transform (DCT) and motion compensation (MC). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT video coding (block motion compensation).

Most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-called container formats.

Encoding theory

Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal redundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly.

Most video compression formats and codecs exploit both spatial and temporal redundancy (e.g. through difference coding with motion compensation). Similarities can be encoded by only storing differences between e.g. temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). Inter-frame compression (a temporal delta encoding) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression.

The intra-frame video coding formats used in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a compressed frame refers to data that the editor has deleted.

Usually, video compression additionally employs lossy compression techniques like quantization that reduce aspects of the source data that are (more or less) irrelevant to the human visual perception by exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas in a manner similar to those used in JPEG image compression. As in all lossy compression, there is a trade-off between video quality and bit rate, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts.

Methods other than the prevalent DCT-based transform formats, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products. Wavelet compression is used in still-image coders and video coders without motion compensation. Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.

Inter-frame coding

In inter-frame coding, individual frames of a video sequence are compared from one frame to the next, and the video compression codec records the differences to the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than data generated by intra-frame compression. Usually, the encoder will also transmit a residue signal which describes the remaining more subtle differences to the reference imagery. Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate.
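
A stripped-down illustration of intra- versus inter-frame coding: the second frame is coded either on its own or as a residual against the first. NumPy is assumed available, zlib stands in for the entropy coder, and motion compensation is omitted entirely, so only the difference-coding part of the story is shown.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
# Two consecutive "frames": the second differs from the first only in a small region.
frame1 = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:120, 150:180] = rng.integers(0, 256, size=(20, 30), dtype=np.uint8)

# Intra-frame coding: compress the frame on its own.
intra = len(zlib.compress(frame2.tobytes()))

# Inter-frame coding: compress only the residual against the reference frame.
residual = frame2.astype(np.int16) - frame1.astype(np.int16)
inter = len(zlib.compress(residual.tobytes()))

print(intra, inter)   # the mostly-zero residual compresses far better
# The decoder reverses it: frame2 == frame1 + residual.
```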

Hybrid block-based transform formats

Processing stages of a typical video encoder

Today, nearly all commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) share the same basic architecture that dates back to H.261 which was standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction using motion vectors, as well as nowadays also an in-loop filtering step.

In the prediction stage, various deduplication and difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data.

Then rectangular blocks of remaining pixel data are transformed to the frequency domain. In the main lossy processing stage, frequency domain data gets quantized in order to reduce information that is irrelevant to human visual perception.

In the last stage statistical redundancy gets largely eliminated by an entropy coder which often applies some form of arithmetic coding.

In an additional in-loop filtering stage, various filters can be applied to the reconstructed image signal. Because these filters are computed inside the encoding loop, they can help compression: they can be applied to reference material before it is used in the prediction process, and they can be guided using the original signal. The most popular example is the deblocking filter, which blurs out blocking artifacts from quantization discontinuities at transform block boundaries.
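
A toy walk through the transform and quantization stages on a single 8×8 block is sketched below, assuming NumPy and SciPy. It uses a smooth synthetic block, a floating-point DCT and a single uniform step size; real codecs use integer transforms, perceptually weighted quantization matrices, and an entropy coder for the quantized values.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
# A smooth 8x8 block (a gentle ramp plus a little noise), standing in for prediction residue.
xg, yg = np.meshgrid(np.arange(8), np.arange(8))
block = (10 * xg + 5 * yg + rng.normal(0, 1, (8, 8))).round()

# Transform stage: the 2-D DCT concentrates the smooth block's energy in few coefficients.
coeffs = dctn(block, norm="ortho")

# Quantization stage (the lossy step): coarse uniform quantizer with step size 8.
step = 8
q = np.round(coeffs / step).astype(int)

# An entropy-coding stage would now code the many zeros in `q` very cheaply.
print(np.count_nonzero(q), "nonzero of", q.size)

# Decoder side: dequantize and inverse-transform the block.
reconstructed = idctn(q * step, norm="ortho")
print("max reconstruction error:", np.max(np.abs(reconstructed - block)))
```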

History

In 1967, A.H. Robinson and C. Cherry proposed a run-length encoding bandwidth compression scheme for the transmission of analog television signals. Discrete cosine transform (DCT), which is fundamental to modern video compression, was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974.

H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology. It was the first video coding format based on DCT compression. H.261 was developed by a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba.

The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263. It was also developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic.

H.264/MPEG-4 AVC was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. AVC commercially introduced the modern context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable-length coding (CAVLC) algorithms. AVC is the main video encoding standard for Blu-ray Discs, and is widely used by video sharing websites and streaming internet services such as YouTube, Netflix, Vimeo, and iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television.

Genetics

Genetic compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression while being less computationally intensive than the leading general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset. Other algorithms developed in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold, allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes).
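
The MAF-sorting idea can be sketched as follows: compute each SNP column's minor allele frequency and reorder the columns so that statistically similar columns sit next to each other before handing the matrix to a compressor. This is only a schematic of the reordering step on synthetic data, with NumPy assumed and zlib standing in for HAPZIPPER's specialized coders; it is not the MAFE algorithm itself, and the permutation must also be stored so the order can be undone.

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)
n_people, n_snps = 200, 2000
# Toy genotype matrix: entries count 0/1/2 copies of the minor allele, frequencies vary per SNP.
maf_true = rng.uniform(0.0, 0.5, n_snps)
genotypes = rng.binomial(2, maf_true, size=(n_people, n_snps)).astype(np.uint8)

# Reordering step: sort SNP columns by their observed minor allele frequency,
# grouping columns with similar statistics together.
maf_observed = genotypes.mean(axis=0) / 2
order = np.argsort(maf_observed)
sorted_genotypes = genotypes[:, order]

# Compare how well a generic compressor does on each column layout.
before = len(zlib.compress(genotypes.T.tobytes()))
after = len(zlib.compress(sorted_genotypes.T.tobytes()))
print(before, after)
```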

Outlook and currently unused potential

It is estimated that the total amount of data stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007, but that when the corresponding content is optimally compressed, this represents only 295 exabytes of Shannon information.

Moral nihilism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Moral_nihilism

Moral nihilism (also known as ethical nihilism) is the meta-ethical view that nothing is morally right or wrong.

Moral nihilism is distinct from moral relativism, which allows for actions to be wrong relative to a particular culture or individual. It is also distinct from expressivism, according to which when we make moral claims, "We are not making an effort to describe the way the world is ... we are venting our emotions, commanding others to act in certain ways, or revealing a plan of action".

Moral nihilism today broadly tends to take the form of an error theory, the view originally developed by J. L. Mackie in his 1977 book Ethics: Inventing Right and Wrong. Error theory and nihilism broadly take the form of a negative claim about the existence of objective values or properties. Under traditional views there are moral properties or methods which hold objectively in some sense beyond our contingent interests and which morally obligate us to act. For Mackie and the error theorists, such properties do not exist in the world, and therefore morality conceived of by reference to objective facts must also not exist. Therefore, morality in the traditional sense does not exist.

However, holding nihilism does not necessarily imply that we should give up using moral or ethical language; some nihilists contend that it remains a useful tool. In fact Mackie and other contemporary defenders of Error Theory, such as Richard Joyce, defend the use of moral or ethical talk and action even in knowledge of their fundamental falsity. The legitimacy of this activity however is questionable and is a subject of great debate in philosophy at the moment.

Forms of nihilism

Moral nihilists agree that all claims such as 'murder is morally wrong' are not true. But nihilistic views differ in two ways.

Some may say that such claims are neither true nor false; others say that they are all false.

Nihilists differ in the scope of their theories. Error theorists typically claim that it is only distinctively moral claims which are false; practical nihilists claim that there are no reasons for action of any kind; some nihilists extend this claim to include reasons for belief.

Ethical language: false versus not truth-apt

J. L. Mackie argues that moral assertions are only true if there are moral properties, but because there are none, all such claims are false. Under such a view, moral propositions which express beliefs are systematically in error. For on Mackie's view, if there are to be moral properties, they must be objective and therefore not amenable to differences in subjective desires and preferences. Moreover, these moral properties, if they did exist, would need to be intrinsically motivating, standing in some primitive relation to our consciousness: they would have to be capable of guiding us morally merely by our being clearly aware of their truth. But this is not the case, and such ideas are, in his view, plainly queer.

Other versions of the theory claim that moral assertions are not true because they are neither true nor false. This form of moral nihilism claims that moral beliefs and assertions presuppose the existence of moral facts that do not exist. Consider, for example, the claim that the present king of France is bald. Some argue that this claim is neither true nor false because it presupposes that there is currently a king of France, but there is not. The claim suffers from "presupposition failure". Richard Joyce argues for this form of moral nihilism under the name "fictionalism".

The scope question

Error theory is built on three principles:

  1. There are no moral features in this world; nothing is right or wrong.
  2. Therefore, no moral judgments are true.
  3. However, our sincere moral judgments try, but always fail, to describe the moral features of things.

Thus, we always lapse into error when thinking in moral terms. We are trying to state the truth when we make moral judgments. But since there is no moral truth, all of our moral claims are mistaken. Hence the error. These three principles lead to the conclusion that there is no moral knowledge. Knowledge requires truth. If there is no moral truth, there can be no moral knowledge. Thus moral values are purely chimerical.

Arguments for nihilism

Argument from queerness

The most prominent argument for nihilism is the argument from queerness.

J. L. Mackie argues that there are no objective ethical values, by arguing that they would be queer (strange):

If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe.

For all those who also find such entities queer (prima facie implausible), there is reason to doubt the existence of objective values.

In his book Morality without Foundations: A Defense of Ethical Contextualism (1999), Mark Timmons provides a reconstruction of Mackie's views in the form of the two related arguments. These are based on the rejection of properties, facts, and relationships that do not fit within the worldview of philosophical naturalism, the idea "that everything—including any particular events, facts, properties, and so on—is part of the natural physical world that science investigates" (1999, p. 12). Timmons adds, "The undeniable attraction of this outlook in contemporary philosophy no doubt stems from the rise of modern science and the belief that science is our best avenue for discovering the nature of reality".

There are several ways in which moral properties are supposedly queer:

  • our ordinary moral discourse purports to refer to intrinsically prescriptive properties and facts "that would somehow motivate us or provide us with reasons for action independent of our desires and aversions"—but such properties and facts do not comport with philosophical naturalism.
  • given that objective moral properties supposedly supervene upon natural properties (such as biological or psychological properties), the relation between the moral properties and the natural properties is metaphysically mysterious and does not comport with philosophical naturalism.
  • a moral realist who countenances the existence of metaphysically queer properties, facts, and relations must also posit some special faculty by which we have knowledge of them.

Responses and criticisms

Christine Korsgaard responds to Mackie by saying:

Of course there are entities that meet these criteria. It's true that they are queer sorts of entities and that knowing them isn't like anything else. But that doesn't mean that they don't exist. ... For it is the most familiar fact of human life that the world contains entities that can tell us what to do and make us do it. They are people, and the other animals.

Other criticisms note that, because such entities would have to be something fundamentally different from what we normally experience, and therefore presumably outside our sphere of experience, we cannot prima facie have reason either to doubt or to affirm their existence. Therefore, if one had independent grounds for supposing such things to exist (such as a reductio ad absurdum of the contrary), the argument from queerness cannot give one any particular reason to think otherwise. An argument along these lines has been provided by, for example, Akeel Bilgrami.

Argument from explanatory impotence

Gilbert Harman argued that we do not need to posit the existence of objective values in order to explain our 'moral observations'.

Constructivism (philosophy of science)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_science)

Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to the constructivist, natural science, therefore, consists of mental constructs that aim to explain sensory experience and measurements.

According to constructivists, the world is independent of human minds, but knowledge of the world is always a human and social construction. Constructivism thus opposes the philosophy of objectivism, which embraces the belief that a human can come to know the truth about the natural world unmediated by scientific approximations with different degrees of validity and accuracy.

According to constructivists, there is no single valid methodology in science but rather a diversity of useful methods.

Etymology

The term originates from psychology, education, and social constructivism. The expression "constructivist epistemology" was first used by Jean Piaget in 1967, in the plural form, in the famous article from the Encyclopédie de la Pléiade, Logique et connaissance scientifique ("Logic and Scientific Knowledge"), an important text for epistemology. He refers directly to the mathematician Brouwer and his radical constructivism.

The terms constructionism and constructivism are often used interchangeably, but should not be. Constructionism is an approach to learning that was developed by Papert; the approach was greatly influenced by his work with Piaget, but it is very different. Constructionism involves the creation of a product to show learning. It is believed by constructivists that representations of physical and biological reality, including race, sexuality, and gender, as well as tables, chairs, and atoms, are socially constructed. Marx was among the first to suggest such an ambitious expansion of the power of ideas to inform the material realities of people's lives.

History

Constructivism stems from a number of philosophies. For instance, its early development can be attributed to the thought of Greek philosophers such as Heraclitus ("Everything flows, nothing stands still") and Protagoras ("Man is the measure of all things"). Protagoras is represented by Plato, and hence by the tradition, as a relativist. The Pyrrhonist skeptics have also been so interpreted, although this is more contentious.

Following the Renaissance and the Enlightenment, with phenomenology and the event, Kant delivered a decisive contradiction of the Cartesian epistemology that had grown since Descartes, despite Giambattista Vico's claim in Scienza nuova ("New Science", 1725) that "the norm of the truth is to have made it". The Enlightenment's claim of the universality of Reason as the only true source of knowledge generated a Romantic reaction involving an emphasis on the separate natures of races, species, sexes, and types of human.

  • Gaston Bachelard, known for his psychoanalysis of objective knowledge and for defining the "epistemological obstacle" that can impede a change of scientific paradigm, such as the one that occurred between classical mechanics and Einstein's relativity, opened the teleological way with "The meditation on the object takes the form of the project". In the following famous saying, he insists that the ways in which questions are posed determine the trajectory of scientific movement, before summarizing "nothing is given, all is constructed": "And, irrespective of what one might assume, in the life of a science, problems do not arise by themselves. It is precisely this that marks out a problem as being of the true scientific spirit: all knowledge is in response to a question. If there were no questions, there would be no scientific knowledge. Nothing proceeds from itself. Nothing is given. All is constructed." (Gaston Bachelard, La formation de l'esprit scientifique, 1934). While quantum mechanics was starting to grow, Gaston Bachelard made a call for a new science in Le nouvel esprit scientifique (The New Scientific Spirit).
  • Paul Valéry, a 20th-century French poet, reminds us of the importance of representations and action: "We have always sought explanations when it was only representations that we could seek to invent", "My hand feels touched as well as it touches; reality says this, and nothing more".
  • This link with action, which could be called a "philosophy of action", was well represented by Spanish poet Antonio Machado: Caminante, no hay camino, se hace camino al andar. ("Traveler, there is no road; you make your own path as you walk.")
  • Ludwik Fleck establishes scientific constructivism by introducing the notions of thought collective (Denkkollektiv), and thought style (Denkstil), through which the evolution of science is much more understandable because the research objects can be described in terms of the assumptions (thought style) that are shared for practical but also inherently social reasons, or just because any thought collective tends to preserve itself. These notions have been drawn upon by Thomas Kuhn.
  • Norbert Wiener gave another defense of teleology in the 1943 paper Behavior, Purpose and Teleology and is one of the creators of cybernetics.
  • Jean Piaget, after the creation in 1955 of the International Centre for Genetic Epistemology in Geneva, first uses the expression "constructivist epistemologies" (see above). According to Ernst von Glasersfeld, Jean Piaget is "the great pioneer of the constructivist theory of knowing" (in An Exposition of Constructivism: Why Some Like it Radical, 1990) and "the most prolific constructivist in our century" (in Aspects of Radical Constructivism, 1996).
  • J. L. Austin is associated with the view that speech is not only passively describing a given reality, but it can change the (social) reality to which it is applied through speech acts.
  • Herbert A. Simon called these new sciences (cybernetics, cognitive sciences, decision and organization sciences) "the sciences of the artificial" because, owing to the abstraction of their objects (information, communication, decision), they cannot be matched with classical epistemology and its experimental method and refutability.
  • Gregory Bateson and his book Steps to an Ecology of Mind (1972).
  • George Kelly (psychologist) and his book The Psychology of Personal Constructs (1955).
  • Heinz von Foerster, invited by Jean Piaget, presented "Objects: tokens for (Eigen-)behaviors" in 1976 in Geneva at a genetic epistemology symposium, a text that would become a reference for constructivist epistemology. His epistemological arguments were summarized in the book The Dream of Reality by Lynn Segal.
  • Paul Watzlawick, who supervised in 1984 the publication of The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism).
  • Ernst von Glasersfeld has promoted radical constructivism since the end of the 1970s (see below).
  • Edgar Morin and his book La méthode (1977–2004, six volumes).
  • Mioara Mugur-Schächter is also a quantum mechanics specialist.
  • Jean-Louis Le Moigne for his encyclopedic work on constructivist epistemology and his General Systems theory (see "Le Moigne's Defense of Constructivism" by Ernst von Glasersfeld).
  • Niklas Luhmann who developed "operative constructivism" in the course of developing his theory of autopoietic social systems, drawing on the works of (among others) Bachelard, Valéry, Bateson, von Foerster, von Glasersfeld, and Morin.

Constructivism and sciences

Social constructivism in sociology

One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized.

Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life.

Constructivism in philosophy of science

Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals.

But paradigm debates are not really about relative problem-solving ability, though for good reasons they are usually couched in those terms. Instead, the issue is which paradigm should in future guide research on problems many of which neither competitor can yet claim to resolve completely. A decision between alternate ways of practicing science is called for, and in the circumstances that decision must be based less on past achievement than on future promise. ... A decision of that kind can only be made on faith.

— Thomas Kuhn, The Structure of Scientific Revolutions, pp. 157–158

The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models. These models evolve over time as guided by scientific inspiration and experiments.

In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this that they become more open to considering other ways of interpreting any results of the research. Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof.

Constructivism and psychology

In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality."

In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding". The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences.

Constructivism and education

Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change.

Kincheloe's characteristics of critical constructivism:

  • Knowledge is socially constructed: World and information co-construct one another
  • Consciousness is a social construction
  • Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness
  • The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability
  • The importance of uniting logic and emotion in the process of knowledge and producing knowledge
  • The inseparability of the knower and the known
  • The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements
  • The existence of multiple realities: Making sense of a world far more complex than we originally imagined
  • Becoming humble knowledge workers: Understanding our location in the tangled web of reality
  • Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge
  • Constructing practical knowledge for critical social action
  • Complexity: Overcoming reductionism
  • Knowledge is always entrenched in a larger process
  • The centrality of interpretation: Critical hermeneutics
  • The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information
  • Constructing new ways of being human: Critical ontology

Constructivist approaches

Critical constructivism

A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept.

Cultural constructivism

Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies.

Genetic epistemology

James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva.

Radical constructivism

Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but "circularly conjoined".

Radical constructivism is closely related to second-order cybernetics.

Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains.

Relational constructivism

Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads. It maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world.

In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptional processes. Björn Kraus puts it in a nutshell:

It is substantial for relational constructivism that it basically originates from an epistemological point of view, thus from the subject and its construction processes. Coming from this perspective it then focusses on the (not only social, but also material) relations under which these cognitive construction processes are performed. Consequently, it's not only about social construction processes, but about cognitive construction processes performed under certain relational conditions.


Criticisms

Numerous criticisms have been levelled at Constructivism. The most common one is that it either explicitly advocates or implicitly reduces to relativism.

Another criticism of constructivism is that it holds that the concepts of two different social formations are entirely different and incommensurable. This being the case, it is impossible to make comparative judgments about statements made according to each worldview, since the criteria of judgment will themselves have to be based on some worldview or other. If this is the case, it brings into question how communication between them about the truth or falsity of any given statement could be established.

The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them.

Anti-realism

From Wikipedia, the free encyclopedia

In analytic philosophy, anti-realism is a position which encompasses many varieties such as metaphysical, mathematical, semantic, scientific, moral and epistemic. The term was first articulated by British philosopher Michael Dummett in an argument against a form of realism Dummett saw as 'colorless reductionism'.

In anti-realism, the truth of a statement rests on its demonstrability through internal logic mechanisms, such as the context principle or intuitionistic logic, in direct opposition to the realist notion that the truth of a statement rests on its correspondence to an external, independent reality. In anti-realism, this external reality is hypothetical and is not assumed.

Anti-realism in its most general sense can be understood as being in contrast to a generic realism, which holds that distinctive objects of a subject-matter exist and have properties independent of one's beliefs and conceptual schemes. The ways in which anti-realism rejects these types of claims can vary dramatically. Because this encompasses statements containing abstract ideal objects (i.e. mathematical objects), anti-realism may apply to a wide range of philosophical topics, from material objects to the theoretical entities of science, mathematical statements, mental states, events and processes, the past and the future.

Varieties

Metaphysical anti-realism

One kind of metaphysical anti-realism maintains a skepticism about the physical world, arguing either: 1) that nothing exists outside the mind, or 2) that we would have no access to a mind-independent reality, even if it exists. The latter case often takes the form of a denial of the idea that we can have 'unconceptualised' experiences (see Myth of the Given). Conversely, most realists (specifically, indirect realists) hold that perceptions or sense data are caused by mind-independent objects. But this introduces the possibility of another kind of skepticism: since our understanding of causality is that the same effect can be produced by multiple causes, there is a lack of determinacy about what one is really perceiving, as in the brain in a vat scenario. The main alternative to this sort of metaphysical anti-realism is metaphysical realism.

On a more abstract level, model-theoretic anti-realist arguments hold that a given set of symbols in a theory can be mapped onto any number of sets of real-world objects—each set being a "model" of the theory—provided the relationship between the objects is the same (compare with symbol grounding).
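To make this model-theoretic point concrete, the following Lean 4 fragment is a minimal sketch (illustrative only; the theory, relation, and model names are hypothetical, not drawn from the literature): a one-axiom "theory" is stated once and then satisfied by two models built from entirely different objects that share the same relational structure.

```lean
/-- A toy "theory": a carrier of objects with one binary relation R,
    subject to a single axiom saying every object is related to something. -/
structure ToyTheory where
  Carrier : Type
  R : Carrier → Carrier → Prop
  total : ∀ x : Carrier, ∃ y : Carrier, R x y

/-- Model 1: natural numbers, with R read as "y is the successor of x". -/
def natModel : ToyTheory where
  Carrier := Nat
  R := fun x y => y = x + 1
  total := fun x => ⟨x + 1, rfl⟩

/-- Model 2: Booleans, with R read as "y is the negation of x". -/
def boolModel : ToyTheory where
  Carrier := Bool
  R := fun x y => y = !x
  total := fun x => ⟨!x, rfl⟩
```

Both `natModel` and `boolModel` satisfy the same axiom even though their objects have nothing in common, which is the sense in which the theory's symbols underdetermine what they are "really" about.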

In ancient Greek philosophy, nominalist (anti-realist) doctrines about universals were proposed by the Stoics, especially Chrysippus. In early modern philosophy, conceptualist anti-realist doctrines about universals were proposed by thinkers like René Descartes, John Locke, Baruch Spinoza, Gottfried Wilhelm Leibniz, George Berkeley, and David Hume. In late modern philosophy, anti-realist doctrines about knowledge were proposed by the German idealist Georg Wilhelm Friedrich Hegel. Hegel was a proponent of what is now called inferentialism: he believed that the ground for the axioms and the foundation for the validity of inferences lie in their right consequences, and that the axioms do not explain the consequences. Kant and Hegel held conceptualist views about universals. In contemporary philosophy, anti-realism was revived in the form of empirio-criticism, logical positivism, semantic anti-realism and scientific instrumentalism (see below).

Mathematical anti-realism

In the philosophy of mathematics, realism is the claim that mathematical entities such as 'number' have an observer-independent existence. Empiricism, which associates numbers with concrete physical objects, and Platonism, in which numbers are abstract, non-physical entities, are the preeminent forms of mathematical realism.

The "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with physical entities ("the truth-values of our mathematical assertions depend on facts involving platonic entities that reside in a realm outside of space-time") Whilst our knowledge of physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects.

Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.

Counterarguments

Anti-realist arguments hinge on the idea that a satisfactory, naturalistic account of thought processes can be given for mathematical reasoning. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm, as in the argument given by Sir Roger Penrose.

Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism, in which he puts forward a position called realistic rationalism, combining metaphysical realism and rationalism.

A more radical defense is to deny the separation of the physical world and the Platonic world, i.e. the mathematical universe hypothesis (a variety of mathematicism). In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.

Semantic anti-realism

The term "anti-realism" was introduced by Michael Dummett in his 1982 paper "Realism" in order to re-examine a number of classical philosophical disputes, involving such doctrines as nominalism, Platonic realism, idealism and phenomenalism. The novelty of Dummett's approach consisted in portraying these disputes as analogous to the dispute between intuitionism and Platonism in the philosophy of mathematics.

According to intuitionists (anti-realists with respect to mathematical objects), the truth of a mathematical statement consists in our ability to prove it. According to Platonic realists, the truth of a statement consists in its correspondence to objective reality. Thus, intuitionists are ready to accept a statement of the form "P or Q" as true only if we can prove P or if we can prove Q. In particular, we cannot in general claim that "P or not P" is true (the law of excluded middle), since in some cases we may not be able to prove the statement "P" nor prove the statement "not P". Similarly, intuitionists object to the existence property for classical logic, where one can prove ∃x.φ(x) without being able to produce any term t for which φ(t) holds.
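To illustrate the contrast described above, here is a minimal Lean 4 sketch (illustrative only, not part of the original article): a constructive proof of an existential statement supplies an explicit witness, whereas the law of excluded middle is not a rule of the intuitionistic core and must be invoked as a classical axiom.

```lean
-- Constructively, proving an existential statement means exhibiting a witness:
-- here the witness is 2, and `rfl` checks that 2 + 2 computes to 4.
theorem exists_double_eq_four : ∃ n : Nat, n + n = 4 :=
  ⟨2, rfl⟩

-- "P or not P" (excluded middle) is not provable intuitionistically for an
-- arbitrary proposition; in Lean it is obtained only via the classical axioms.
example (p : Prop) : p ∨ ¬p :=
  Classical.em p
```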

Dummett argues that this notion of truth lies at the bottom of various classical forms of anti-realism, and uses it to re-interpret phenomenalism, claiming that it need not take the form of reductionism.

Dummett's writings on anti-realism draw heavily on the later writings of Ludwig Wittgenstein, concerning meaning and rule following, and can be seen as an attempt to integrate central ideas from the Philosophical Investigations into the constructive tradition of analytic philosophy deriving from Gottlob Frege.

Scientific anti-realism

In philosophy of science, anti-realism applies chiefly to claims about the non-reality of "unobservable" entities such as electrons or genes, which are not detectable with human senses.

One prominent variety of scientific anti-realism is instrumentalism, which takes a purely agnostic view towards the existence of unobservable entities, in which the unobservable entity X serves as an instrument to aid in the success of theory Y and does not require proof for the existence or non-existence of X.

Moral anti-realism

In the philosophy of ethics, moral anti-realism (or moral irrealism) is a meta-ethical doctrine that there are no objective moral values or normative facts. It is usually defined in opposition to moral realism, which holds that there are objective moral values, such that a moral claim may be either true or false. Specifically, the moral anti-realist is committed to denying one of the following three statements:

  1. The Semantic Thesis: Moral statements have meaning; that is, they express propositions or are the kind of things that can be true or false.
  2. The Alethic Thesis: Some moral propositions are true.
  3. The Metaphysical Thesis: The metaphysical status of moral facts is robust and ordinary, not importantly different from other facts about the world.

Different versions of moral anti-realism deny different statements: non-cognitivism denies the first claim, arguing that moral statements have no meaning or truth content; error theory denies the second claim, arguing that all moral statements are false; and ethical subjectivism denies the third claim, arguing that the truth of moral statements is mind-dependent.

Examples of anti-realist moral theories might thus include non-cognitivism, error theory, and ethical subjectivism.

There is a debate as to whether moral relativism is actually an anti-realist position. While many versions deny the metaphysical thesis, some do not: one could imagine a system of morality that requires individuals to obey the written laws of their country. Such a system would be a version of moral relativism, since different individuals would be required to follow different laws, but the moral facts would be physical facts about the world, not mental facts, and so metaphysically ordinary. Thus, different versions of moral relativism might be considered anti-realist or realist.

Epistemic anti-realism

Just as moral anti-realism asserts the nonexistence of normative facts, epistemic anti-realism asserts the nonexistence of facts in the domain of epistemology. Thus, the two are now sometimes grouped together as "metanormative anti-realism". Prominent defenders of epistemic anti-realism include Hartry Field, Simon Blackburn, Matthew Chrisman, and Allan Gibbard, among others.

Representation of a Lie group

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Representation_of_a_Lie_group ...