
Tuesday, December 4, 2018

EEG Decodes How People Navigate Complex Sequences


Summary: A new EEG study reveals that strong working memory is vital for working with abstract information.

Source: University of Oregon.

Original link:  https://neurosciencenews.com/eeg-complex-sequences-120193/

To perform a song or a dance, or to write computer code, people need to call upon the basic elements of their craft and then order and recombine them in creative ways.

University of Oregon scientists have captured how the brain builds such complex sequences from a small set of basic elements.

Doctoral student Atsushi Kikumoto and Ulrich Mayr, a professor in the Department of Psychology, detailed their National Science Foundation-supported work in a paper published online Nov. 14 in the journal eLife.

In the study, electrical activity and oscillation patterns were measured by electroencephalogram, using electrodes on the scalp, from 88 study participants, all university students, while they performed complex sequential patterns.

“Basic elements – the alphabet of any type of performance — need to be combined in a certain order within larger chunks, and these chunks, in turn, need to be combined in a certain order to arrive at the complete sequence,” said Mayr, who directs the UO’s Cognitive Dynamics Lab. “This is at the heart of a lot of human creativity.

“For example, if you are playing a piece on the piano, your brain needs to keep track in which larger musical phrase, which bar, and which exact note you are currently at,” he said. “So, you need a kind of mental addressing system. It is this addressing system that we discovered with our EEG methods.”

Subjects memorized sequential patterns that consisted of three different angles of lines as basic elements. When participants subsequently tried to reconstruct the succession of lines, the EEG showed oscillatory patterns that Kikumoto and Mayr decoded using machine learning techniques.

It turns out that the EEG patterns kept track of the precise location within the sequence – which chunk, which position within the chunk, and which line angle people were focusing on.

Data from two of the experiments of a University of Oregon study show clear differences in oscillations of electrical activity generated among subjects who have either high or low levels of working memory. Those with high working memory were most successful in completing an activity that involved recalling chunks of basic elements during their performance. NeuroscienceNews.com image is credited to Ulrich Mayr.

The findings from the basic research help to understand why some people have difficulties with executing complex sequential plans, Mayr said.

Within the hierarchically organized addressing system, not everyone showed a robust EEG expression of the more abstract levels, he said. Only people with strong working memory scores – a reflection of the capacity of an individual’s mental workspace – seemed to have a crisp record of the current chunk.

“Without the chunk information they literally got lost within the mental landscape of the overall sequence,” he said.

EEG allowed the researchers to capture electrical signaling in the brain in real time. Mayr and Kikumoto are now working to complement the findings with magnetic resonance imaging to document exactly where in the brain the sequential addressing system is localized.
 
About this neuroscience research article

Funding: Funding for the research came from the National Science Foundation.

Source: Jim Barlow – University of Oregon
 
Publisher: Organized by NeuroscienceNews.com.
 
Image Source: NeuroscienceNews.com image is credited to Ulrich Mayr.
 
Original Research: Open access research for “Decoding hierarchical control of sequential behavior in oscillatory EEG activity” by Atsushi Kikumoto and Ulrich Mayr in eLife. Published November 14, 2018.
 
doi:10.7554/eLife.38550

Electronic properties of graphene

From Wikipedia, the free encyclopedia

GNR band structure for zig-zag orientation. Tight-binding calculations show that zig-zag orientation is always metallic.
 
GNR band structure for armchair orientation. Tight-binding calculations show that armchair orientation can be semiconducting or metallic depending on width (chirality).

Graphene is a zero-gap semiconductor, because its conduction and valence bands meet at the Dirac points, which are six locations in momentum space, on the edge of the Brillouin zone, divided into two non-equivalent sets of three points. The two sets are labeled K and K'. The sets give graphene a valley degeneracy of gv = 2. By contrast, for traditional semiconductors the primary point of interest is generally Γ, where momentum is zero. Four electronic properties separate it from other condensed matter systems.

However, if graphene is confined along one in-plane direction, forming a nanoribbon, its electronic structure is different. If the ribbon edge is "zig-zag", the bandgap is zero. If it is "armchair", the bandgap is non-zero (see figure).

Electronic spectrum

Electrons propagating through graphene's honeycomb lattice effectively lose their mass, producing quasi-particles that are described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin-1/2 particles.

Dispersion relation

When atoms are placed onto the graphene hexagonal lattice, the overlap between the pz(π) orbitals and the s or the px and py orbitals is zero by symmetry. The pz electrons forming the π bands in graphene can therefore be treated independently. Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) for electrons with wave vector k is

E_\pm(k_x, k_y) = \pm\gamma_0 \sqrt{1 + 4\cos^2\!\left(\tfrac{k_y a}{2}\right) + 4\cos\!\left(\tfrac{k_y a}{2}\right)\cos\!\left(\tfrac{\sqrt{3}\,k_x a}{2}\right)},

with the nearest-neighbor (π orbitals) hopping energy γ0 ≈ 2.8 eV and the lattice constant a ≈ 2.46 Å. The conduction and valence bands correspond to the plus and minus signs, respectively. With one pz electron per atom in this model the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the K points in the Brillouin zone), where there is a zero density of states but no band gap. The graphene sheet thus displays a semimetallic (or zero-gap semiconductor) character, although not when rolled into a carbon nanotube, due to its curvature. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. In the vicinity of the K points the energy depends linearly on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure.
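
Expanding this dispersion to first order about one of the K points yields the linear spectrum discussed below; the expansion and the resulting Fermi velocity are standard tight-binding results, quoted here for orientation rather than taken from the text above:

E_\pm(\mathbf{K} + \mathbf{q}) \approx \pm \hbar v_F |\mathbf{q}|, \qquad v_F = \frac{3\gamma_0 a_{\mathrm{cc}}}{2\hbar} = \frac{\sqrt{3}\,\gamma_0 a}{2\hbar} \approx 9 \times 10^{5}\ \mathrm{m/s},

where a_cc = a/√3 ≈ 1.42 Å is the carbon–carbon distance; this is the Fermi velocity vF ≈ 10⁶ m/s quoted in the next subsection.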

As a consequence, at low energies, even neglecting the true spin, the electrons can be described by an equation that is formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass M → 0, which leads to additional features:

-i\hbar\, v_F\, \vec{\sigma}\cdot\vec{\nabla}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}).

Here vF ≈ 10⁶ m/s (about 0.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; σ⃗ is the vector of the Pauli matrices; ψ(r) is the two-component wave function of the electrons; and E is their energy.

The equation describing the electrons' linear dispersion relation is

E(\mathbf{k}) = \hbar v_F \sqrt{k_x^2 + k_y^2},

where the wavevector k is measured from the Dirac points (the zero of energy is chosen here to coincide with the Dirac points). The equation uses a pseudospin matrix formula that describes the two sublattices of the honeycomb lattice.

'Massive' electrons

Graphene's unit cell has two identical carbon atoms and two zero-energy states: one in which the electron resides on atom A, the other in which the electron resides on atom B. However, if the two atoms in the unit cell are not identical, the situation changes. Hunt et al. showed that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atom A versus atom B enough that the electrons develop a mass and an accompanying band gap of about 30 meV (0.03 eV).

The mass can be positive or negative. An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass are a hallmark of a topological phase and display much the same physics as topological insulators.

If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires and other mesoscopic structures. It also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation.

Single-atom wave propagation

Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors and ferromagnets.

Electron transport

Graphene displays remarkable electron mobility at room temperature, with reported values in excess of 15,000 cm²⋅V⁻¹⋅s⁻¹. Hole and electron mobilities were expected to be nearly identical. The mobility is nearly independent of temperature between 10 K and 100 K, which implies that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room-temperature mobility to 200,000 cm²⋅V⁻¹⋅s⁻¹ at a carrier density of 10¹² cm⁻², about 10×10⁶ times greater than that of copper.

The corresponding resistivity of graphene sheets would be 10⁻⁶ Ω⋅cm. This is less than the resistivity of silver, the lowest otherwise known at room temperature. However, on SiO₂ substrates, scattering of electrons by optical phonons of the substrate is a larger effect than scattering by graphene's own phonons. This limits mobility to 40,000 cm²⋅V⁻¹⋅s⁻¹.
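
As a rough consistency check (not part of the original text, and taking graphene's interlayer spacing of about 0.335 nm as the effective sheet thickness), the quoted carrier density and phonon-limited mobility indeed reproduce a resistivity of order 10⁻⁶ Ω⋅cm:

R_s = \frac{1}{n e \mu} \approx \frac{1}{(10^{12}\,\mathrm{cm^{-2}})(1.6\times 10^{-19}\,\mathrm{C})(2\times 10^{5}\,\mathrm{cm^{2}\,V^{-1}\,s^{-1}})} \approx 31\ \Omega/\square, \qquad \rho = R_s\, t \approx 31\ \Omega \times 3.35\times 10^{-8}\ \mathrm{cm} \approx 1\times 10^{-6}\ \Omega\,\mathrm{cm}.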

Charge transport is affected by adsorption of contaminants such as water and oxygen molecules. This leads to non-repetitive and large hysteresis I-V characteristics. Researchers must carry out electrical measurements in vacuum. Graphene surfaces can be protected by a coating with materials such as SiN, PMMA and h-BN. In January 2015, the first stable graphene device operation in air over several weeks was reported, for graphene whose surface was protected by aluminum oxide. In 2015 lithium-coated graphene was observed to exhibit superconductivity and in 2017 evidence for unconventional superconductivity was demonstrated in single layer graphene placed on the electron-doped (non-chiral) d-wave superconductor Pr2−xCexCuO4 (PCCO).

Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can act more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. In copper, resistance increases in proportion to length as electrons encounter impurities.

Transport is dominated by two modes. One is ballistic and temperature independent, while the other is thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a particular length—the ballistic mode at 16 micrometres and the other at 160 nanometres.

Graphene electrons can cover micrometer distances without scattering, even at room temperature.

Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of 4e²/h. The origin of this minimum conductivity is unclear. However, rippling of the graphene sheet or ionized impurities in the SiO₂ substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be 4e²/(πh); however, most measurements are of order 4e²/h or greater and depend on impurity concentration.

Graphene exhibits positive photoconductivity near zero carrier density and negative photoconductivity at high carrier density. This is governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate.

Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in vacuum. Even for dopant concentrations in excess of 10¹² cm⁻², carrier mobility exhibits no observable change. Doping graphene with potassium in ultra-high vacuum at low temperature can reduce mobility 20-fold. The mobility reduction is reversible on removing the potassium.

Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits.

In 2018, superconductivity was reported in twisted bilayer graphene.

Excitonic properties

First-principles calculations with quasiparticle corrections and many-body effects are used to explore the electronic and optical properties of graphene-based materials. The approach involves three stages. With GW calculations, the properties of graphene-based materials are accurately investigated, including bulk graphene, nanoribbons, edge- and surface-functionalized armchair ribbons, hydrogen-saturated armchair ribbons, the Josephson effect in graphene SNS junctions with a single localized defect, and armchair ribbon scaling properties.

Magnetic properties

In 2014 researchers magnetized graphene by placing it on an atomically smooth layer of magnetic yttrium iron garnet. The graphene's electronic properties were unaffected. Prior approaches involved doping. The dopant's presence negatively affected its electronic properties.

Strong magnetic fields

In magnetic fields of ≈10 tesla, additional plateaus of the Hall conductivity at σxy = νe²/h with ν = 0, ±1, ±4 are observed. The observation of further plateaus and of the fractional quantum Hall effect at ν = 1/3 has also been reported.

These observations indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis is that the magnetic catalysis of symmetry breaking is responsible for lifting the degeneracy.

Spin transport

Graphene is claimed to be an ideal material for spintronics due to its small spin-orbit interaction and the near absence of nuclear magnetic moments in carbon (as well as a weak hyperfine interaction). Electrical spin current injection and detection has been demonstrated up to room temperature. Spin coherence length above 1 micrometre at room temperature was observed, and control of the spin current polarity with an electrical gate was observed at low temperature.

Spintronic and magnetic properties can be present in graphene simultaneously. Low-defect graphene nanomeshes manufactured using a non-lithographic method exhibit large-amplitude ferromagnetism even at room temperature. Additionally a spin pumping effect is found for fields applied in parallel with the planes of few-layer ferromagnetic nanomeshes, while a magnetoresistance hysteresis loop is observed under perpendicular fields.

Dirac fluid

Charged particles in high-purity graphene behave as a strongly interacting, quasi-relativistic plasma. The particles move in a fluid-like manner, traveling along a single path and interacting with high frequency. The behavior was observed in a graphene sheet faced on both sides with an h-BN crystal sheet.

Anomalous quantum Hall effect

The quantum Hall effect is a quantum mechanical version of the Hall effect, which is the production of transverse (perpendicular to the main current) conductivity in the presence of a magnetic field. The Hall conductivity is quantized at integer multiples (the "Landau level") of the basic quantity e²/h (where e is the elementary electric charge and h is Planck's constant). It can usually be observed only in very clean silicon or gallium arsenide solids, at temperatures of a few kelvin and in high magnetic fields.

Graphene shows the quantum Hall effect with respect to conductivity quantization: the effect is anomalous in that the sequence of steps is shifted by 1/2 with respect to the standard sequence and carries an additional factor of 4. Graphene's Hall conductivity is σxy = ±4(N + 1/2)·e²/h, where N is the Landau level index and the double valley and double spin degeneracies give the factor of 4. These anomalies are present at room temperature, i.e. at roughly 20 °C (293 K).

This behavior is a direct result of graphene's massless Dirac electrons. In a magnetic field, their spectrum has a Landau level with energy precisely at the Dirac point. This level is a consequence of the Atiyah–Singer index theorem and is half-filled in neutral graphene, leading to the "+1/2" in the Hall conductivity. Bilayer graphene also shows the quantum Hall effect, but with only one of the two anomalies: its Hall conductivity is σxy = ±4N·e²/h, keeping the factor of 4 but without the 1/2 shift. In this second anomaly, the first plateau at N = 0 is absent, indicating that bilayer graphene stays metallic at the neutrality point.
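
The Landau-level spectrum behind this argument is the standard result for massless Dirac fermions in a perpendicular field B (a textbook formula, added here for context rather than taken from the text above):

E_N = \operatorname{sgn}(N)\, v_F \sqrt{2 e \hbar B\, |N|}, \qquad N = 0, \pm 1, \pm 2, \ldots,

so the N = 0 level sits exactly at the Dirac point, and its half-filling in neutral graphene is what produces the half-integer sequence of Hall plateaus.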

Unlike normal metals, graphene's longitudinal resistance shows maxima rather than minima for integral values of the Landau filling factor in measurements of the Shubnikov–de Haas oscillations, a behavior referred to by the term integral quantum Hall effect. These oscillations show a phase shift of π, known as Berry's phase. Berry's phase arises due to the zero effective carrier mass near the Dirac points. The temperature dependence of the oscillations reveals that the carriers have a non-zero cyclotron mass, despite their zero effective mass.

Graphene samples prepared on nickel films, and on both the silicon face and carbon face of silicon carbide, show the anomalous effect directly in electrical measurements. Graphitic layers on the carbon face of silicon carbide show a clear Dirac spectrum in angle-resolved photoemission experiments. The effect is observed in cyclotron resonance and tunneling experiments.

Casimir effect

The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electrodynamical vacuum. Mathematically it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary (or matching) conditions on the interacting bodies' surfaces. Since graphene/electromagnetic field interaction is strong for a one-atom-thick material, the Casimir effect is of interest.

Van der Waals force

The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic, asymptotic power law in contrast to the usual inverse quartic.

Effect of substrate

The electronic properties of graphene are significantly influenced by the supporting substrate. The Si(100)/H surface does not perturb graphene's electronic properties, whereas the interaction between it and the clean Si(100) surface changes its electronic states significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy.

Functional programming

From Wikipedia, the free encyclopedia

In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. In functional code, the output value of a function depends only on the arguments that are passed to the function, so calling a function f twice with the same value for an argument x produces the same result f(x) each time; this is in contrast to procedures depending on a local or global state, which may produce different results at different times when called with the same arguments but a different program state. Eliminating side effects, i.e., changes in state that do not depend on the function inputs, can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.

Functional programming has its origins in lambda calculus, a formal system developed in the 1930s to investigate computability, the Entscheidungsproblem, function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus. Another well-known declarative programming paradigm, logic programming, is based on relations.

In contrast, imperative programming changes state with commands in the source code, the simplest example being assignment. Imperative programming does have subroutine functions, but these are not functions in the mathematical sense. They can have side effects that may change the value of program state. Functions without return values therefore make sense. Because of this, they lack referential transparency, i.e., the same language expression can result in different values at different times depending on the state of the executing program.

Functional programming languages have largely been emphasized in academia rather than in commercial software development. However, prominent programming languages that support functional programming such as Common Lisp, Scheme, Clojure, Wolfram Language (also known as Mathematica), Racket, Erlang, OCaml, Haskell, and F# have been used in industrial and commercial applications by a wide variety of organizations. JavaScript, one of the world's most widely distributed languages, has the properties of a dynamically typed functional language, in addition to imperative and object-oriented paradigms. Functional programming is also key to some languages that have found success in specific domains, like R (statistics), J, K and Q from Kx Systems (financial analysis), XQuery/XSLT (XML), and Opal. Widespread domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, especially in eschewing mutable values.

Programming in a functional style can also be accomplished in languages that are not specifically designed for functional programming. For example, the imperative Perl programming language has been the subject of a book describing how to apply functional programming concepts. This is also true of the PHP programming language. C++11, Java 8, and C# 3.0 all added constructs to facilitate the functional style. The Julia language also offers functional programming abilities. An interesting case is that of Scala – it is frequently written in a functional style, but the presence of side effects and mutable state place it in a grey area between imperative and functional languages.

History

Lambda calculus provides a theoretical framework for describing functions and their evaluation. It is a mathematical abstraction rather than a programming language—but it forms the basis of almost all current functional programming languages. An equivalent theoretical formulation, combinatory logic, is commonly perceived as more abstract than lambda calculus and preceded it in invention. Combinatory logic and lambda calculus were both originally developed to achieve a clearer approach to the foundations of mathematics.

An early functional-flavored language was Lisp, developed in the late 1950s for the IBM 700/7000 series scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.

Information Processing Language (IPL) is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.

John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.

The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.

Meanwhile, the development of Scheme, a simple lexically scoped and (impurely) functional dialect of Lisp, as described in the influential Lambda Papers and the classic 1985 textbook Structure and Interpretation of Computer Programs, brought awareness of the power of functional programming to the wider programming-languages community.

In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language, Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.

More recently it has found use in niches such as parametric CAD, courtesy of the OpenSCAD language built on the CSG geometry framework, although its prohibition on reassigning values has led to much confusion among users who are often unfamiliar with functional programming as a concept.

Functional programming continues to be used in commercial settings.

Concepts

A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.

First-class and higher-order functions

Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f.
Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term that describes programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
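
As an illustrative sketch in Python (the names derivative and square are ad hoc, not taken from the text), the differential-operator example can be mimicked by a function that accepts a function and returns a new function:

# Numerical analogue of the differential operator d/dx as a higher-order function.
def derivative(f, h=1e-6):
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)  # central difference

square = lambda x: x * x
d_square = derivative(square)   # d/dx of x^2 is 2x
print(round(d_square(3.0), 3))  # approximately 6.0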

Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
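
A minimal Python sketch of this idea (illustrative names): the successor function obtained by partially applying addition to the number one, first with functools.partial and then as an explicitly curried function:

from functools import partial

def add(a, b):
    return a + b

successor = partial(add, 1)              # addition partially applied to 1
print(successor(41))                     # 42

curried_add = lambda a: lambda b: a + b  # each application returns a function awaiting the next argument
print(curried_add(1)(41))                # 42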

Pure functions

Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:
  • If the result of a pure expression is not used, it can be removed without affecting other expressions.
  • If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization; see the sketch after this list.)
  • If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
  • If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation).
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics.
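
A short Python sketch of the memoization opportunity mentioned above (illustrative code); the caching is sound only because the function is pure:

from functools import lru_cache

@lru_cache(maxsize=None)  # results may be reused because fib has no side effects
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))            # each distinct argument is computed only once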

Recursion

Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. Although some recursion requires maintaining a stack, tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. The Scheme language standard requires implementations to recognize and optimize tail recursion. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
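
A Python sketch (illustrative only) of a tail-recursive definition next to the loop that a tail-call-optimizing compiler would effectively produce; note that CPython itself does not perform this optimization:

def fact_tail(n, acc=1):
    # Tail recursion: the recursive call is the last action, so nothing is pending after it.
    return acc if n == 0 else fact_tail(n - 1, acc * n)

def fact_loop(n):
    # The equivalent iteration that tail-call optimization would generate.
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc

print(fact_tail(10), fact_loop(10))  # 3628800 3628800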

Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
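
A Python sketch of a fold and an unfold (illustrative code): functools.reduce plays the role of the catamorphism, and a small generator plays the role of the anamorphism:

from functools import reduce

total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5], 0)  # fold: collapses the list to 15

def unfold(step, seed):
    # Unfold: build a sequence from a seed until step returns None.
    while True:
        result = step(seed)
        if result is None:
            return
        value, seed = result
        yield value

countdown = list(unfold(lambda n: (n, n - 1) if n > 0 else None, 5))
print(total, countdown)  # 15 [5, 4, 3, 2, 1]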

Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.

Strict versus non-strict evaluation

Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression:

print length([2+1, 3*2, 1/0, 5-4])

fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
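
Python evaluates strictly, but the distinction can be imitated by wrapping each element in a thunk (a zero-argument lambda); a rough sketch of the list example above:

items = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]
print(len(items))  # 4: taking the length never forces the failing 1/0 element
# items[2]()       # forcing the third thunk would raise ZeroDivisionError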

The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.

Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.

Type systems

Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time at the risk of false positive errors (rejecting some valid programs). This is in contrast to the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time at the risk of false negative errors (accepting some invalid programs); invalid programs are instead rejected at runtime, when enough information is available to avoid rejecting valid programs. The use of algebraic datatypes makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.

Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in predicate logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.

A limited form of dependent types called generalized algebraic data types (GADT's) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADT's are available in the Glasgow Haskell Compiler, in OCaml (since version 4.00) and in Scala (as "case classes"), and have been proposed as additions to other languages including Java and C#.

Referential transparency

Functional programs do not have assignment statements, that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.

Consider the C assignment statement x = x * 10: it changes the value assigned to the variable x. Say the initial value of x was 1; then two consecutive executions of the statement yield the values 10 and then 100 for x. Clearly, replacing x = x * 10 with either 10 or 100 gives a program with a different meaning, so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.

Now consider another function, such as int plusone(int x) { return x + 1; }. It is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.
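
The same contrast in Python (an illustrative sketch mirroring the C discussion above):

x = 1

def times_ten():
    global x
    x = x * 10  # mirrors the C statement x = x * 10
    return x

print(times_ten(), times_ten())  # 10 100: the same expression yields different values

def plusone(n):
    return n + 1                 # pure: plusone(1) can be replaced by 2 anywhere

print(plusone(1), plusone(1))    # 2 2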

Functional programming in non-functional languages

It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.

JavaScript, Lua and Python had first-class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, and C++11.
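
For example, in current Python, where reduce lives in the functools module:

from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]
evens = filter(lambda n: n % 2 == 0, numbers)
squares = map(lambda n: n * n, evens)
print(reduce(lambda acc, n: acc + n, squares, 0))  # 4 + 16 + 36 = 56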

In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.

In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. However, the presence of checked exceptions in Java can make functional programming inconvenient, because it can be necessary to catch checked exceptions and then rethrow them—a problem that does not occur in other JVM languages that do not have checked exceptions, such as Scala.

In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.

Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.

Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array.

Data structures

Purely functional data structures are often represented in a different way than their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating: calling the insert method creates only the nodes along the updated path, while the rest are shared with the previous version.
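
A minimal Python sketch of persistence and structural sharing (illustrative, using nested pairs rather than Clojure's tree-based vectors):

def prepend(value, lst):
    # lst is None (empty) or a pair (head, tail); prepending creates one new cell
    # and leaves every existing cell untouched, so older versions remain valid.
    return (value, lst)

old = prepend(2, prepend(3, None))  # the list [2, 3]
new = prepend(1, old)               # the list [1, 2, 3]
print(new[1] is old)                # True: the tail is shared, not copied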

Comparison to imperative programming

Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.

Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order “map” function that takes a function and a list, generating and returning a new list by applying the function to each list item.
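
A short Python sketch of the two styles (illustrative names):

prices = [100, 250, 40]

discounted = []  # imperative: loop and mutate an accumulator
for p in prices:
    discounted.append(p * 0.9)

discounted_fp = list(map(lambda p: p * 0.9, prices))  # functional: map builds a new list
print(discounted == discounted_fp)  # True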

Simulating state

There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.

The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).

Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.
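
A Python sketch of explicit state passing for the bank-account example mentioned earlier (illustrative code):

def deposit(balance, amount):
    return "ok", balance + amount             # result plus the new state

def withdraw(balance, amount):
    if amount > balance:
        return "insufficient funds", balance  # old state returned unchanged
    return "ok", balance - amount

status, balance = deposit(0, 100)
status, balance = withdraw(balance, 30)
print(status, balance)  # ok 70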

Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.

Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.

Efficiency issues

Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware (which is a highly evolved Turing machine). Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.

Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.

Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).

Coding styles

Imperative programs have the environment and a sequence of steps manipulating the environment. Functional programs have an expression that is successively substituted until it reaches normal form. An example illustrates this with different solutions to the same programming goal (calculating Fibonacci numbers).

PHP

Printing first 10 Fibonacci numbers, using function

function fib(int $n) : int {
    return ($n === 0 || $n === 1) ? $n : fib($n - 1) + fib($n - 2);
}

for ($i = 0; $i <= 10; $i++) echo fib($i) . PHP_EOL;

Printing first 10 Fibonacci numbers, using closure
 
$fib = function(int $n) use(&$fib) : int {
    return ($n === 0 || $n === 1) ? $n : $fib($n - 1) + $fib($n - 2);
};

for ($i = 0; $i <= 10; $i++) echo $fib($i) . PHP_EOL;

Printing a list with first 10 Fibonacci numbers, with generators
 
function fib(int $n) {
    yield 0; $n--;
    yield 1; $n--;
    [$first, $second] = [1, 2];
    while ($n-- !== 0) {
        yield $first;
        [$first, $second] = [$second, $first + $second];
    }
}

$fibo = fib(10);
foreach ($fibo as $value) {
    echo $value . PHP_EOL;
}

Python

Printing first 10 Fibonacci numbers, iterative
 
def fibonacci(n, first=0, second=1):
    for _ in range(n):
        print(first) # side-effect
        first, second = second, first + second # assignment
fibonacci(10)

Printing first 10 Fibonacci numbers, functional expression style
 
fibonacci = (lambda n, first=0, second=1:
    "" if n == 0 else
    str(first) + "\n" + fibonacci(n - 1, second, first + second))
print(fibonacci(10), end="")

Printing a list with first 10 Fibonacci numbers, with generators
 
def fibonacci(n, first=0, second=1):
    for _ in range(n):
        yield first
        first, second = second, first + second # assignment
print(list(fibonacci(10)))

Printing a list with first 10 Fibonacci numbers, functional expression style
 
fibonacci = (lambda n, first=0, second=1:
    [] if n == 0 else
    [first] + fibonacci(n - 1, second, first + second))
print(fibonacci(10))

Printing first 10 Fibonacci numbers, recursive style
 
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-2) + fibonacci(n-1)

for n in range(10):
    print(fibonacci(n))

Haskell

Printing first 10 Fibonacci numbers, functional expression style

fibonacci_aux = \n first second->
    if n == 0 then "" else
    show first ++ "\n" ++ fibonacci_aux (n - 1) second (first + second)
fibonacci = \n-> fibonacci_aux n 0 1
main = putStr (fibonacci 10)

Printing a list with first 10 Fibonacci numbers, functional expression style

fibonacci_aux = \n first second->
    if n == 0 then [] else
    [first] ++ fibonacci_aux (n - 1) second (first + second)
fibonacci = \n-> fibonacci_aux n 0 1
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional expression style

fibonacci = \n-> if n == 0 then 0
                 else if n == 1 then 1
                      else fibonacci(n - 1) + fibonacci(n - 2)
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional expression style, tail recursive
 
fibonacci_aux = \n first second->
    if n == 0 then first else
    fibonacci_aux (n - 1) second (first + second)
fibonacci = \n-> fibonacci_aux n 0 1
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional expression style with recursive lists
 
fibonacci_aux = \first second-> first : fibonacci_aux second (first + second)
select = \n zs-> if n==0 then head zs
                 else select (n - 1) (tail zs)
fibonacci = \n-> select n (fibonacci_aux 0 1)
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional expression style with primitives for recursive lists
 
fibonacci_aux = \first second-> first : fibonacci_aux second (first + second)
fibonacci = \n-> (fibonacci_aux 0 1) !! n
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional expression style with primitives for recursive lists, more concisely
 
fibonacci_aux = 0:1:zipWith (+) fibonacci_aux (tail fibonacci_aux)
fibonacci = \n-> fibonacci_aux !! n
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional declaration style
 
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci n = fibonacci (n-1) + fibonacci (n-2)
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional declaration style, tail recursive
 
fibonacci_aux 0 first _ = first
fibonacci_aux n first second = fibonacci_aux (n - 1) second (first + second)
fibonacci n = fibonacci_aux n 0 1
main = putStrLn (show (fibonacci 10))

Printing the 11th Fibonacci number, functional declaration style, using lazy infinite lists and primitives
 
fibs = 1 : 1 : zipWith (+) fibs (tail fibs) 
-- an infinite list of the fibonacci numbers
-- fibs is defined in terms of fibs
fibonacci = (fibs !!)
main = putStrLn $ show $ fibonacci 11

Printing the first 10 Fibonacci numbers, list comprehension (generator) style
 
fibs = [0, 1] ++ [(fibs !! x) + (fibs !! (x + 1)) | x <- [0..]]
main = putStrLn $ show $ take 10 fibs

Perl 6

As influenced by Haskell and others, Perl 6 has several functional and declarative approaches to problems. For example, you can declaratively build up a well-typed recursive version (the type constraints are optional) through signature pattern matching:
 
# define constraints that are common to all candidates
proto fib ( UInt:D \n --> UInt:D ) {*}

multi fib ( 0 --> 0 ) { }
multi fib ( 1 --> 1 ) { }

multi fib ( \n ) {
    fib(n - 1) + fib(n - 2)
}

for ^10 -> $n { say fib($n) }

An alternative to this is to construct a lazy iterative sequence, which appears as an almost direct illustration of the sequence:
 
my @fib = 0, 1, *+* ... *; # Each additional entry is the sum of the previous two
                           # and this sequence extends lazily indefinitely
say @fib[^10];             # Display the first 10 entries

Erlang

Erlang is a functional, concurrent, general-purpose programming language. A Fibonacci algorithm implemented in Erlang (Note: This is only for demonstrating the Erlang syntax. Use other algorithms for fast performance):
 
-module(fib).    % This is the file 'fib.erl', the module and the filename must match
-export([fib/1]). % This exports the function 'fib' of arity 1

fib(1) -> 1; % If 1, then return 1, otherwise (note the semicolon ; meaning 'else')
fib(2) -> 1; % If 2, then return 1, otherwise
fib(N) -> fib(N - 2) + fib(N - 1).

Elixir

Elixir is a functional, concurrent, general-purpose programming language that runs on the Erlang virtual machine (BEAM).

The Fibonacci function can be written in Elixir as follows:
 
defmodule Fibonacci do
  def fib(0), do: 0
  def fib(1), do: 1
  def fib(n), do: fib(n-1) + fib(n-2)
end

Lisp

The Fibonacci function can be written in Common Lisp as follows:
 
(defun fib (n &optional (a 0) (b 1))
  (if (= n 0)
      a
      (fib (- n 1) b (+ a b))))

or
 
(defun fib (n)
  (if (or (= n 0) (= n 1))
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

The program can then be called as
 
(fib 10)

Clojure

The Fibonacci function can be written in Clojure as follows:
 
(defn fib
  [n]
  (loop [a 0 b 1 i n]
    (if (zero? i)
      a
      (recur b (+ a b) (dec i)))))

The program can then be called as
 
(fib 7)

Explicitly using "lazy-seq", the infinite sequence of Fibonacci numbers can be defined recursively.
 
;; lazy infinite sequence
(def fibs (cons 0 (cons 1 (lazy-seq (map +' fibs (rest fibs))))))

;; list of first 10 Fibonacci numbers taken from infinite sequence
(take 10 fibs)

Kotlin

The Fibonacci function can be written in Kotlin as follows:
 
fun fib(x: Int): Int = if (x in 0..1) x else fib(x - 1) + fib(x - 2)

The program can then be called as
 
fib(7)

Swift

The Fibonacci function can be written in Swift as follows:
 
func fib(_ x: Int) -> Int {
    if x == 0 || x == 1 {
        return x
    } else {
        return fib(x - 1) + fib(x - 2)
    }
}

The function can then be called as
 
fib(7)

JavaScript

The Fibonacci function can be written in JavaScript as follows:
 
const fib = x => (x === 0 || x === 1) ? x : fib(x - 1) + fib(x - 2);

SequenceL

SequenceL is a functional, concurrent, general-purpose programming language. The Fibonacci function can be written in SequenceL as follows:
 
fib(n) := n when n < 2 else
          fib(n - 1) + fib(n - 2);

The function can then be called as
 
fib(10)

To reduce the memory consumed by the call stack when computing a large Fibonacci term, a tail-recursive version can be used. A tail-recursive function is implemented by the SequenceL compiler as a memory-efficient looping structure:
 
fib(n) := fib_Helper(0, 1, n);

fib_Helper(prev, next, n) :=
    prev when n < 1 else
    next when n = 1 else
    fib_Helper(next, next + prev, n - 1);

Ruby

The Fibonacci function can be written in ruby using lambdas as follows:
 
 fib = -> n { (n == 0 || n == 1) ? n : fib[n - 1] + fib[n - 2] }

Tcl

The Fibonacci function can be written in Tcl as a recursive function as follows:
 
proc fibo {x} {
    expr {$x < 2 ? $x : [fibo [expr {$x - 1}]] + [fibo [expr {$x - 2}]]}
}

Scala

The Fibonacci function can be written in Scala in several ways:

Imperative "Java" style
 
def fibImp(n: Int): Int = {
  var i = 0
  var j = 1

  for (k <- 0 until n) {
    val l = i + j
    i = j
    j = l
  }
  i
}

Recursive style, slow
 
def fibRec(n: Int): Int = n match {
  case 0 => 0
  case 1 => 1
  case _ => fibRec(n - 1) + fibRec(n - 2)
}

Recursive style with tail call optimization, fast
 
import scala.annotation.tailrec

def fibTailRec(n: Int): Int = {
  @tailrec
  def fib(a: Int, b: Int, c: Int): Int =
    if (a == 0) 0
    else if (a < 2) c
    else fib(a - 1, c, b + c)
  fib(n, 0, 1)
}

Using Scala streams
 
val fibStream: Stream[Int] =
  0 #:: 1 #:: (fibStream zip fibStream.tail).map(n => n._1 + n._2)

Use in industry

Functional programming has long been popular in academia, but with few industrial applications. However, recently several prominent functional programming languages have been used in commercial or industrial systems. For example, the Erlang programming language, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems. It has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. The Scheme dialect of Lisp was used as the basis for several applications on early Apple Macintosh computers, and has more recently been applied to problems such as training simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming, and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied by a range of companies, in areas such as aerospace systems, hardware design, and web programming.

Other functional programming languages that have seen use in industry include Scala, F#, (both being functional-OO hybrids with support for both purely functional and imperative programming) Wolfram Language, Lisp, Standard ML, and Clojure.

In education

Functional programming is being used as a method to teach problem solving, algebra and geometric concepts. It has also been used as a tool to teach classical mechanics in Structure and Interpretation of Classical Mechanics.
