
Thursday, March 7, 2019

Abstraction

From Wikipedia, the free encyclopedia

Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. 

"An abstraction" is the outcome of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category.

Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular subjectively valued purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, excluding, but not eliminating, the other phenomenal and cognitive characteristics of that particular ball. In a type–token distinction, a type (e.g., a 'ball') is more abstract than its tokens (e.g., 'that leather soccer ball').

Abstraction in its secondary use is a material process, discussed in the themes below.

Origins

Thinking in abstractions is considered by anthropologists, archaeologists, and sociologists to be one of the key traits in modern human behaviour, which is believed to have developed between 50,000 and 100,000 years ago. Its development is likely to have been closely connected with the development of human language, which (whether spoken or written) appears to both involve and facilitate abstract thinking.

History

Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. It is the opposite of specification, which is the analysis or breaking-down of a general idea or abstraction into concrete facts. Abstraction can be illustrated with Francis Bacon's Novum Organum (1620), a book of modern scientific philosophy written in early 17th-century England to encourage modern thinkers to collect specific facts before making any generalizations.

Bacon used and promoted induction as an abstraction tool, and it countered the ancient deductive-thinking approach that had dominated the intellectual world since the times of Greek philosophers like Thales, Anaximander, and Aristotle. Thales (c. 624–546 BCE) believed that everything in the universe comes from one main substance, water. He deduced or specified from a general idea, "everything is water", to the specific forms of water such as ice, snow, fog, and rivers. 

Modern scientists can also use the opposite approach of abstraction, going from particular collected facts to one general idea. Copernicus (1473–1543) determined that the sun, not the earth, is the center of our solar system; Kepler (1571–1630) had to utilize thousands of measurements to finally conclude that Mars moves in an elliptical orbit about the sun; Galileo (1564–1642) assembled multiple specific facts into the law of falling bodies; and Newton (1642–1727) abstracted such results further into general laws of motion governing the planets.

Themes

Compression

An abstraction can be seen as a compression process, mapping multiple different pieces of constituent data to a single piece of abstract data; based on similarities in the constituent data, for example, many different physical cats map to the abstraction "CAT". This conceptual scheme emphasizes the inherent equality of both constituent and abstract data, thus avoiding problems arising from the distinction between "abstract" and "concrete". In this sense the process of abstraction entails the identification of similarities between objects, and the process of associating these objects with an abstraction (which is itself an object). For example, picture 1 below illustrates the concrete relationship "Cat sits on Mat".

Chains of abstractions can be constructed, moving from neural impulses arising from sensory perception to basic abstractions such as color or shape, to experiential abstractions such as a specific cat, to semantic abstractions such as the "idea" of a CAT, to classes of objects such as "mammals" and even categories such as "object" as opposed to "action".
For example, graph 1 below expresses the abstraction "agent sits on location". This conceptual scheme entails no specific hierarchical taxonomy (such as the one mentioned involving cats and mammals), only a progressive exclusion of detail.

Instantiation

Non-existent things in any particular place and time are often seen as abstract. By contrast, instances, or members, of such an abstract thing might exist in many different places and times. 

Those abstract things are then said to be multiply instantiated, in the sense of picture 1, picture 2, etc., shown below. It is not sufficient, however, to define abstract ideas as those that can be instantiated and to define abstraction as the movement in the opposite direction to instantiation. Doing so would make the concepts "cat" and "telephone" abstract ideas since despite their varying appearances, a particular cat or a particular telephone is an instance of the concept "cat" or the concept "telephone". Although the concepts "cat" and "telephone" are abstractions, they are not abstract in the sense of the objects in graph 1 below. We might look at other graphs, in a progression from cat to mammal to animal, and see that animal is more abstract than mammal; but on the other hand mammal is a harder idea to express, certainly in relation to marsupial or monotreme.

Perhaps confusingly, some philosophies refer to tropes (instances of properties) as abstract particulars—e.g., the particular redness of a particular apple is an abstract particular. This is similar to qualia and sumbebekos.

Material process

Still retaining the primary meaning of the Latin 'abstrahere', 'to draw away from', the abstraction of money, for example, works by drawing away from the particular value of things, allowing completely incommensurate objects to be compared (see the section on 'Physicality' below). Karl Marx's writing on the commodity abstraction recognizes a parallel process.

The state (polity) as both concept and material practice exemplifies the two sides of this process of abstraction. Conceptually, 'the current concept of the state is an abstraction from the much more concrete early-modern use as the standing or status of the prince, his visible estates'. At the same time, materially, the 'practice of statehood is now constitutively and materially more abstract than at the time when princes ruled as the embodiment of extended power'.

Ontological status

The way that physical objects, like rocks and trees, have being differs from the way that properties of abstract concepts or relations have being; for example, the way the concrete, particular individuals pictured in picture 1 exist differs from the way the concepts illustrated in graph 1 exist. That difference accounts for the ontological usefulness of the word "abstract". The word applies to properties and relations to mark the fact that, if they exist, they do not exist in space or time, but that instances of them can exist, potentially in many different places and times.

Physicality

A physical object (a possible referent of a concept or word) is considered concrete (not abstract) if it is a particular individual that occupies a particular place and time. However, in the secondary sense of the term 'abstraction', this physical object can carry materially abstracting processes. For example, record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in containers. According to Schmandt-Besserat (1981), these clay containers contained tokens, the total of which were the count of objects being transferred. The containers thus served as something of a bill of lading or an accounts book. In order to avoid breaking open the containers for the count, marks were placed on the outside of the containers. These physical marks, in other words, acted as material abstractions of a materially abstract process of accounting, using conceptual abstractions (numbers) to communicate its meaning.

Abstract things are sometimes defined as those things that do not exist in reality or exist only as sensory experiences, like the color red. That definition, however, suffers from the difficulty of deciding which things are real (i.e. which things exist in reality). For example, it is difficult to agree on whether concepts like God, the number three, and goodness are real, abstract, or both.

An approach to resolving such difficulty is to use predicates as a general term for whether things are variously real, abstract, concrete, or of a particular property (e.g., good). Questions about the properties of things are then propositions about predicates, which propositions remain to be evaluated by the investigator. In graph 1 below, the graphical relationships like the arrows joining boxes and ellipses might denote predicates.

Referencing and referring

Abstractions sometimes have ambiguous referents; for example, "happiness" (when used as an abstraction) can refer to as many things as there are people and events or states of being which make them happy. Likewise, "architecture" refers not only to the design of safe, functional buildings, but also to elements of creation and innovation which aim at elegant solutions to construction problems, to the use of space, and to the attempt to evoke an emotional response in the builders, owners, viewers and users of the building.

Simplification and ordering

Abstraction uses a strategy of simplification, wherein formerly concrete details are left ambiguous, vague, or undefined; thus effective communication about things in the abstract requires an intuitive or common experience between the communicator and the communication recipient. This is true for all verbal/abstract communication. 

Conceptual graph for A Cat sitting on the Mat (graph 1)
 
Cat on Mat (picture 1)

For example, many different things can be red. Likewise, many things sit on surfaces (as in picture 1, to the right). The property of redness and the relation sitting-on are therefore abstractions of those objects. Specifically, the conceptual diagram graph 1 identifies only three boxes, two ellipses, and four arrows (and their five labels), whereas picture 1 shows much more pictorial detail, with scores of relationships left implicit in the picture rather than spelled out as the nine explicit details of the graph.

Graph 1 details some explicit relationships between the objects of the diagram. For example, the arrow between the agent and CAT:Elsie depicts an example of an is-a relationship, as does the arrow between the location and the MAT. The arrows between the gerund/present participle SITTING and the nouns agent and location express the diagram's basic relationship: "agent is SITTING on location"; Elsie is an instance of CAT.

Although the description sitting-on (graph 1) is more abstract than the graphic image of a cat sitting on a mat (picture 1), the delineation of abstract things from concrete things is somewhat ambiguous; this ambiguity or vagueness is characteristic of abstraction. Thus something as simple as a newspaper might be specified to six levels, as in Douglas Hofstadter's illustration of that ambiguity, with a progression from abstract to concrete in Gödel, Escher, Bach (1979):
(1) a publication
(2) a newspaper
(3) The San Francisco Chronicle
(4) the May 18 edition of The San Francisco Chronicle
(5) my copy of the May 18 edition of The San Francisco Chronicle
(6) my copy of the May 18 edition of The San Francisco Chronicle as it was when I first picked it up (as contrasted with my copy as it was a few days later: in my fireplace, burning)
An abstraction can thus encapsulate each of these levels of detail with no loss of generality. But perhaps a detective or philosopher/scientist/engineer might seek to learn about something, at progressively deeper levels of detail, to solve a crime or a puzzle.

Thought processes

In philosophical terminology, abstraction is the thought process wherein ideas are distanced from objects.

As used in different disciplines

In art

Typically, abstraction is used in the arts as a synonym for abstract art in general. Strictly speaking, it refers to art unconcerned with the literal depiction of things from the visible world—it can, however, refer to an object or image which has been distilled from the real world, or indeed, another work of art. Artwork that reshapes the natural world for expressive purposes is called abstract; that which derives from, but does not imitate, a recognizable subject is called nonobjective abstraction. In the 20th century the trend toward abstraction coincided with advances in science, technology, and changes in urban life, eventually reflecting an interest in psychoanalytic theory. Later still, abstraction was manifest in more purely formal terms, such as color, freedom from objective context, and a reduction of form to basic geometric designs.

In computer science

Computer scientists use abstraction to make models that can be used and re-used without having to re-write all the program code for each new application on every different type of computer. They communicate their solutions with the computer by writing source code in some particular computer language which can be translated into machine code for different types of computers to execute. Abstraction allows program designers to separate a framework (categorical concepts related to computing problems) from specific instances which implement details. This means that the program code can be written so that it does not depend on the specific details of supporting applications, operating system software, or hardware, but on a categorical concept of the solution. A solution to the problem can then be integrated into the system framework with minimal additional work. This allows programmers to take advantage of another programmer's work, while requiring only an abstract understanding of the implementation of another's work, apart from the problem that it solves.
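As a rough sketch of this separation (an illustration added here, not part of the original article; the Storage interface and all names in it are invented for the example), the following Python fragment writes framework code against a categorical concept and supplies the specific details separately:

    from abc import ABC, abstractmethod

    class Storage(ABC):
        """The categorical concept: framework code depends only on this interface."""
        @abstractmethod
        def save(self, key: str, value: str) -> None: ...
        @abstractmethod
        def load(self, key: str) -> str: ...

    class InMemoryStorage(Storage):
        """One specific instance that implements the details."""
        def __init__(self):
            self._data = {}
        def save(self, key: str, value: str) -> None:
            self._data[key] = value
        def load(self, key: str) -> str:
            return self._data[key]

    def run_app(store: Storage) -> str:
        # Framework code: never mentions dictionaries, files, or databases.
        store.save("greeting", "hello")
        return store.load("greeting")

    print(run_app(InMemoryStorage()))  # prints: hello

Swapping in a file- or database-backed Storage would require no change to run_app, which is exactly the kind of reuse the paragraph describes.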

In general semantics

Abstractions and levels of abstraction play an important role in the theory of general semantics originated by Alfred Korzybski. Anatol Rapoport wrote: "Abstracting is a mechanism by which an infinite variety of experiences can be mapped on short noises (words)."

In history

Francis Fukuyama defines history as "a deliberate attempt of abstraction in which we separate out important from unimportant events".

In linguistics

Researchers in linguistics frequently apply abstraction so as to allow analysis of the phenomena of language at the desired level of detail. A commonly used abstraction, the phoneme, abstracts speech sounds in such a way as to neglect details that cannot serve to differentiate meaning. Other analogous kinds of abstractions (sometimes called "emic units") considered by linguists include morphemes, graphemes, and lexemes.

Abstraction also arises in the relation between syntax, semantics, and pragmatics. Pragmatics involves considerations that make reference to the user of the language; semantics considers expressions and what they denote (the designata) abstracted from the language user; and syntax considers only the expressions themselves, abstracted from the designata.

In mathematics

Abstraction in mathematics is the process of extracting the underlying essence of a mathematical concept, removing any dependence on real-world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matches other abstract descriptions of equivalent phenomena.

The advantages of abstraction in mathematics are:
  • It reveals deep connections between different areas of mathematics
  • Known results in one area can suggest conjectures in a related area
  • Techniques and methods from one area can be applied to prove results in a related area
The main disadvantage of abstraction is that highly abstract concepts are more difficult to learn, and require a degree of mathematical maturity and experience before they can be assimilated.
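For example (an illustrative sketch, not part of the original article), the group concept abstracts the structure shared by integer addition, modular arithmetic, and the symmetries of geometric figures. In standard notation, a group is a set $G$ with an operation $\cdot$ satisfying:

    (a \cdot b) \cdot c = a \cdot (b \cdot c)                    (associativity)
    \exists e \in G : \; e \cdot a = a \cdot e = a               (identity)
    \forall a \in G \; \exists a^{-1} : \; a \cdot a^{-1} = e    (inverses)

The integers under addition satisfy these axioms with $e = 0$ and $a^{-1} = -a$; any theorem proved from the axioms alone then applies at once to every system that satisfies them, which is the "deep connection" advantage listed above.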

In music

In music, the term abstraction can be used to describe improvisatory approaches to interpretation, and may sometimes indicate abandonment of tonality. Atonal music has no key signature, and is characterized by the exploration of internal numeric relationships.

In neurology

A recent meta-analysis suggests that the verbal system is more engaged for processing abstract concepts, while the perceptual system is more engaged for processing concrete concepts: abstract concepts elicit greater brain activity in the inferior frontal gyrus and middle temporal gyrus, whereas concrete concepts elicit greater activity in the posterior cingulate, precuneus, fusiform gyrus, and parahippocampal gyrus. Other research into the human brain suggests that the left and right hemispheres differ in their handling of abstraction. For example, one meta-analysis reviewing human brain lesions has shown a left hemisphere bias during tool usage.

In philosophy

Abstraction in philosophy is the process (or, to some, the alleged process) in concept formation of recognizing some set of common features in individuals, and on that basis forming a concept of that feature. The notion of abstraction is important to understanding some philosophical controversies surrounding empiricism and the problem of universals. It has also recently become popular in formal logic under predicate abstraction. Another philosophical tool for discussion of abstraction is thought space.

In psychology

Carl Jung's definition of abstraction broadened its scope beyond the thinking process to include exactly four mutually exclusive, different complementary psychological functions: sensation, intuition, feeling, and thinking. Together they form a structural totality of the differentiating abstraction process. Abstraction operates in one of these functions when it excludes the simultaneous influence of the other functions and other irrelevancies, such as emotion. Abstraction requires selective use of this structural split of abilities in the psyche. The opposite of abstraction is concretism. Abstraction is one of Jung's 57 definitions in Chapter XI of Psychological Types.
There is an abstract thinking, just as there is abstract feeling, sensation and intuition. Abstract thinking singles out the rational, logical qualities ... Abstract feeling does the same with ... its feeling-values. ... I put abstract feelings on the same level as abstract thoughts. ... Abstract sensation would be aesthetic as opposed to sensuous sensation and abstract intuition would be symbolic as opposed to fantastic intuition. (Jung, [1921] (1971): par. 678).

In social theory

In social theory, abstraction is used as both an ideational and material process. Alfred Sohn-Rethel asked: "Can there be abstraction other than by thought?" He used the example of commodity abstraction to show that abstraction occurs in practice as people create systems of abstract exchange that extend beyond the immediate physicality of the object and yet have real and immediate consequences. This work was extended through the 'constitutive abstraction' approach of writers associated with the journal Arena. Two books that have taken this theme of the abstraction of social relations as an organizing process in human history are Nation Formation: Towards a Theory of Abstract Community (1996) and its second volume, Globalism, Nationalism, Tribalism: Bringing Theory Back In (2006). These books argue that the nation is an abstract community bringing together strangers who will never meet as such, thus constituting materially real and substantial, but abstracted and mediated, relations. The books suggest that contemporary processes of globalization and mediatization have contributed to materially abstracting relations between people, with major consequences for how we live our lives.

It can easily be argued that abstraction is an elementary methodological tool in several disciplines of social science. These disciplines have definite and distinct concepts of man that highlight, by idealization, those aspects of man and his behaviour that are relevant for the given human science. For example, homo sociologicus is man as sociology abstracts and idealizes him, depicting man as a social being. Moreover, we could talk about homo cyber sapiens (the man who can extend his biologically determined intelligence thanks to new technologies) or homo creativus (who is simply creative).

Abstraction (combined with Weberian idealization) plays a crucial role in economics. Breaking away from directly experienced reality was a common trend in 19th-century sciences (especially physics), and this effort fundamentally determined the way economics tried and still tries to approach the economic aspects of social life. It is abstraction we meet in the case of both Newton's physics and neoclassical theory, since the goal was to grasp the unchangeable and timeless essence of phenomena. For example, Newton created the concept of the material point by following the abstraction method, abstracting from the dimension and shape of any perceptible object and preserving only inertial and translational motion. The material point is the ultimate and common feature of all bodies. Neoclassical economists created the indefinitely abstract notion of homo economicus by following the same procedure. Economists abstract from all individual and personal qualities in order to get to those characteristics that embody the essence of economic activity. Eventually, it is the substance of the economic man that they try to grasp. Any characteristic beyond it only disturbs the functioning of this essential core.

Theoretical computer science

From Wikipedia, the free encyclopedia

An artistic representation of a Turing machine. Turing machines are used to model general computing devices.
 
Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on more mathematical topics of computing and includes the theory of computation.

It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description:

TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

History

While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements can be proved or disproved.

These developments have led to the modern study of logic and computability, and indeed the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 paper "A Mathematical Theory of Communication". In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.

With the development of quantum mechanics in the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render most modern public key cryptography systems uselessly insecure.

Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed.

Topics

Algorithms

An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.

An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
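As a minimal sketch of these properties (an added illustration, not part of the article), Euclid's algorithm for the greatest common divisor is a finite list of well-defined instructions that proceeds through a finite number of states and terminates with an output:

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: each iteration is one well-defined state transition."""
        while b != 0:          # terminates because b strictly decreases toward 0
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # prints: 21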

Data structures

A data structure is a particular way of organizing data in a computer so that it can be used efficiently.

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes to retrieve small fractions of their data efficiently, and compilers and databases use dynamic hash tables as lookup tables.

Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory.
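As a small sketch (an added illustration, not from the article) of how the organization of data enables efficient use, a binary search tree keeps keys ordered so that each comparison rules out an entire subtree:

    class Node:
        """Binary search tree node: smaller keys go left, larger keys go right."""
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def contains(root, key):
        # In a balanced tree, each step halves the remaining search space.
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    root = None
    for k in [8, 3, 10, 1, 6]:
        root = insert(root, k)
    print(contains(root, 6), contains(root, 7))  # prints: True False

The B-trees mentioned above generalize this idea to nodes holding many keys, sized to match disk blocks.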

Computational complexity theory

Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
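As a back-of-the-envelope sketch of resource counting (an added illustration, not from the article), compare the worst-case number of comparisons needed to find an item among n sorted elements by linear scan versus binary search:

    import math

    # Worst-case comparisons: linear scan needs ~n, binary search ~log2(n).
    for n in [10, 1_000, 1_000_000]:
        print(f"n={n:>9}  linear={n:>9}  binary={math.ceil(math.log2(n)):>3}")

The widening gap between n and log2(n) is the kind of inherent-difficulty distinction the theory makes precise.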

Distributed computation

Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.
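A minimal single-process sketch of message passing (an added illustration, not from the article; a real distributed system would replace the in-memory queue with a network transport such as an RPC connector or message broker):

    import queue, threading

    inbox = queue.Queue()  # stands in for a network channel

    def worker():
        # A component with no shared state: it reacts only to messages it receives.
        while True:
            msg = inbox.get()
            if msg == "stop":
                break
            print("worker received:", msg)

    t = threading.Thread(target=worker)
    t.start()
    inbox.put("task 1")
    inbox.put("stop")
    t.join()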

Parallel computation

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. 

The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law.
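In standard form (a clarifying addition, not quoted from the article), with $P$ the parallelizable fraction of the program and $N$ the number of processors, the speed-up is

    S(N) = \frac{1}{(1 - P) + P/N}

so even as $N$ grows without bound, the speed-up is capped at $1/(1 - P)$: the serial fraction dominates.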

Very-large-scale integration

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.

Machine learning

Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data. Such algorithms operate by building a model based on inputs and using that to make predictions or decisions, rather than following only explicitly programmed instructions.

Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Machine learning is sometimes conflated with data mining, although that focuses more on exploratory data analysis. Machine learning and pattern recognition "can be viewed as two facets of the same field."

Computational biology

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization.

Computational biology is different from biological computation, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers, but is similar to bioinformatics, which is an interdisciplinary science using computers to store and process biological data.

Computational geometry

Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing, with a history stretching back to antiquity. An ancient precursor is the Sanskrit treatise Shulba Sutras, or "Rules of the Chord", a book of algorithms written around 800 BCE. The book prescribes step-by-step procedures for constructing geometric objects like altars using a peg and chord.

The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.

Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).

Information theory

Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis.
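The central quantity of the theory is Shannon's entropy of a source (a standard formula added here for clarity, not quoted from the article):

    H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)

measured in bits. A fair coin flip has entropy of exactly 1 bit; a biased coin carries less information per flip, which is why its outcomes compress better.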

Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.

Cryptography

Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and that are related to various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
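A minimal sketch of the one-time pad just mentioned (an added illustration, not from the article):

    import secrets

    def otp(message: bytes, key: bytes) -> bytes:
        """One-time pad: XOR each byte with a key byte. Secure only if the key
        is truly random, as long as the message, kept secret, and never reused."""
        assert len(key) == len(message)
        return bytes(m ^ k for m, k in zip(message, key))

    msg = b"attack at dawn"
    key = secrets.token_bytes(len(msg))
    ciphertext = otp(msg, key)
    print(otp(ciphertext, key))  # XOR with the same key decrypts: b'attack at dawn'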

Quantum computation

A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.
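In standard Dirac notation (a clarifying addition, not quoted from the article), a single qubit's state is

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

where measurement yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$; the superposition of both basis states is what distinguishes a qubit from a classical bit.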

As of 2014, quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.

Information-based complexity

Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems such as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.

Computational number theory

Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization.
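As a sketch of why the problem is interesting (an added illustration, not from the article), naive trial division factors small numbers easily but scales far too slowly for the large integers used in cryptography:

    def factor(n: int) -> list:
        """Trial division: try every divisor d up to sqrt(n)."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # whatever remains is prime
        return factors

    print(factor(8051))  # prints: [83, 97]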

Symbolic computation

Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields, because scientific computing is usually based on numerical computation with approximate floating-point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no assigned value and are thus manipulated as symbols (hence the name symbolic computation).

Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations such as simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
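A brief sketch using SymPy, one widely used open-source computer algebra system (an added illustration, not from the article; assumes the SymPy library is installed):

    import sympy as sp

    x = sp.symbols('x')              # x is manipulated as a symbol, not a number
    print(sp.factor(x**2 - 1))       # (x - 1)*(x + 1)   polynomial factorization
    print(sp.diff(sp.sin(x**2), x))  # 2*x*cos(x**2)     differentiation (chain rule)
    print(sp.integrate(2*x, x))      # x**2              indefinite integration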

Program semantics

In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved; syntactically invalid strings yield no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be done by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, thereby creating a model of computation.

Formal methods

Formal methods are a particular kind of mathematically based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.

Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.

Automata theory

Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory within theoretical computer science, under discrete mathematics. The word automata comes from the Greek word αὐτόματα, meaning "self-acting".

Automata theory can thus be described as the study of self-operating virtual machines, used to build a rigorous understanding of how inputs are transformed into outputs, with or without intermediate stages of computation.
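A minimal sketch of such a machine (an added illustration, not from the article): a deterministic finite automaton over the alphabet {0, 1} that accepts exactly the strings containing an even number of 1s:

    # Transition table: (state, input symbol) -> next state
    TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

    def accepts(s: str) -> bool:
        state = "even"                        # start state
        for ch in s:
            state = TRANSITIONS[(state, ch)]  # one transition per input symbol
        return state == "even"                # "even" is the accepting state

    print(accepts("101"), accepts("1011"))  # prints: True False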

Coding theory

Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error-correction and more recently also for network coding. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
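As a toy sketch of error correction (an added illustration, not from the article), a rate-1/3 repetition code deliberately adds redundancy so that a single flipped bit per block can be corrected by majority vote:

    def encode(bits: str) -> str:
        """Repetition code: send every bit three times."""
        return "".join(b * 3 for b in bits)

    def decode(received: str) -> str:
        # Majority vote within each block of three corrects any single flip.
        blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
        return "".join("1" if blk.count("1") >= 2 else "0" for blk in blocks)

    sent = encode("10")       # '111000'
    corrupted = "110000"      # one bit flipped in transit
    print(decode(corrupted))  # prints: 10

Practical codes such as Hamming or LDPC codes achieve the same protection with far less redundancy.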

Computational learning theory

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples.
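A tiny sketch of such an induced classifier (an added illustration, not from the article; the mushroom measurements are invented): a 1-nearest-neighbor rule labels a never-before-seen sample like its closest labeled example:

    def classify(samples, labels, query):
        """1-nearest-neighbor: return the label of the closest training sample."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        best = min(range(len(samples)), key=lambda i: dist(samples[i], query))
        return labels[best]

    # Toy data: (cap width, stem height) -> edibility
    samples = [(2.0, 3.0), (2.5, 2.5), (7.5, 8.0), (8.0, 9.0)]
    labels  = ["edible", "edible", "poisonous", "poisonous"]
    print(classify(samples, labels, (3.0, 3.0)))  # prints: edible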

Organizations

Journals and newsletters

Conferences
