In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions.
When a pure function is called with some given arguments, it will
always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming,
which can have side effects (such as modifying the program's state or
taking input from a user). Proponents of purely functional programming
claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.
Functional programming has its roots in academia, evolving from the lambda calculus,
a formal system of computation based only on functions. Functional
programming has historically been less popular than imperative
programming, but many functional languages are seeing use today in
industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript on the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values.
In addition, many other programming languages support programming in a
functional style or have implemented features from functional
programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).
History
The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.
Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator,
which amounts to a function that accepts a function as an argument,
and, since it is an assembly-level language, code can be data, so IPL
can be regarded as having higher-order functions. However, it relies
heavily on the mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language.
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs".
He defines functional programs as being built up in a hierarchical way
by means of "combining forms" that allow an "algebra of programs"; in
modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. Because Miranda was proprietary, work on Haskell began in 1987 as a consensus effort to create an open standard for functional programming research; implementations have been released since 1990.
More recently, functional programming has found use in niches such as parametric CAD via the OpenSCAD language built on the CGAL framework, although OpenSCAD's restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.
Functional programming continues to be used in commercial settings.
Concepts
A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming).
However, programming languages often cater to several programming
paradigms, so programmers using "mostly imperative" languages may have
utilized some of these concepts.
First-class and higher-order functions
Higher-order functions
are functions that can either take other functions as arguments or
return them as results. In calculus, an example of a higher-order
function is the differential operator d/dx, which returns the derivative of a function f.
Higher-order functions are closely related to first-class functions
in that higher-order functions and first-class functions both allow
functions as arguments and results of other functions. The distinction
between the two is subtle: "higher-order" describes a mathematical
concept of functions that operate on other functions, while
"first-class" is a computer science term for programming language
entities that have no restriction on their use (thus first-class
functions can appear anywhere in the program that other first-class
entities like numbers can, including as arguments to other functions and
as their return values).
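The differential-operator example can be mirrored in code. The following Python sketch is illustrative (the names derivative and square and the step size h are not from the original): it defines a higher-order function that both takes a function as an argument and returns one as a result.

```python
def derivative(f, h=1e-6):
    """Return a new function approximating the derivative of f.

    `derivative` is higher-order: it accepts the function f as an
    argument and returns another function as its result.
    """
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def square(x):
    return x * x

d_square = derivative(square)  # approximates f'(x) = 2x
```

Here d_square behaves like the derivative of square, so d_square(3.0) is approximately 6.0.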
Higher-order functions enable partial application or currying,
a technique that applies a function to its arguments one at a time,
with each application returning a new function that accepts the next
argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
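This can be sketched in Python (the names successor and curried_add are illustrative): partial application fixes one argument of a two-argument function, while currying makes each application return a function awaiting the next argument.

```python
from functools import partial
from operator import add

# Partial application: the addition operator applied to the number one
# yields the successor function.
successor = partial(add, 1)

# Manual currying: applying the first argument returns a function
# that accepts the second.
def curried_add(x):
    return lambda y: x + y
```

With these definitions, successor(41) evaluates to 42, and curried_add(1)(2) to 3.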
Pure functions
Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code:
- If the result of a pure expression is not used, it can be removed without affecting other expressions.
- If a pure function is called with arguments that cause no
side-effects, the result is constant with respect to that argument list
(sometimes called referential transparency or idempotence),
i.e., calling the pure function again with the same arguments returns
the same result. (This can enable caching optimizations such as memoization.)
- If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
- If the entire language does not allow side-effects, then any
evaluation strategy can be used; this gives the compiler freedom to
reorder or combine the evaluation of expressions in a program (for
example, using deforestation).
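The memoization property mentioned above can be sketched in Python with the standard functools.lru_cache decorator (fib is an illustrative example of a pure function):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Pure: the result depends only on n, so a cached result can be
    # substituted for a repeated call without changing the program.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Because fib is referentially transparent, caching its results is safe, turning an exponential-time recursion into a linear-time one.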
While most compilers for imperative programming languages detect pure
functions and perform common-subexpression elimination for pure
function calls, they cannot always do this for pre-compiled libraries,
which generally do not expose this information, thus preventing
optimizations that involve those external functions. Some compilers,
such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added the constexpr keyword with similar semantics.
Recursion
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack,
which consumes space linear in the depth of the recursion. This
could make recursion prohibitively expensive to use instead of
imperative loops. However, a special form of recursion known as tail recursion
can be recognized and optimized by a compiler into the same code used
to implement iteration in imperative languages. Tail recursion
optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
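The accumulator-passing shape of a tail-recursive function can be sketched as follows (Python is used only for illustration; unlike Scheme or ML compilers, the standard Python implementation does not eliminate tail calls):

```python
def sum_to(n, acc=0):
    """Sum the integers 1..n using an accumulator.

    The recursive call is the function's last action (a tail call), so
    a tail-call-optimizing compiler could reuse the current stack
    frame, effectively turning the recursion into a loop.
    """
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)
```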
The Scheme
language standard requires implementations to support proper tail
recursion, meaning they must allow an unbounded number of active tail
calls.
Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so is safe-for-space.
Moreover, contrary to its name, it accounts for all tail calls, not
just tail recursion. While proper tail recursion is usually implemented
by turning code into imperative loops, implementations might implement
it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms
(or "folds" and "unfolds") being the most obvious examples. Such
recursion schemes play a role analogous to built-in control structures
such as loops in imperative languages.
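A fold and an unfold can be sketched in Python (reduce is the standard library's fold; the unfold helper is an illustrative definition, not a standard function):

```python
from functools import reduce

# A fold (catamorphism) collapses a list into a single value
# using a binary operator and an initial accumulator.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)

def unfold(step, seed):
    """An unfold (anamorphism): build a list from a seed value.

    step returns either (value, next_seed) or None to stop.
    """
    out = []
    while (r := step(seed)) is not None:
        value, seed = r
        out.append(value)
    return out

countdown = unfold(lambda n: (n, n - 1) if n > 0 else None, 3)
```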
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata).
As a consequence, these languages fail to be Turing complete and
expressing certain functions in them is impossible, but they can still
express a wide class of interesting computations while avoiding the
problems introduced by unrestricted recursion. Functional programming
limited to well-founded recursion with a few other constraints is called
total functional programming.
Strict versus non-strict evaluation
Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy)
evaluation, concepts that refer to how function arguments are processed
when an expression is being evaluated. The technical difference is in
the denotational semantics
of expressions containing failing or divergent computations. Under
strict evaluation, the evaluation of any term containing a failing
subterm fails. For example, the expression:
print length([2+1, 3*2, 1/0, 5-4])
fails under strict evaluation because of the division by zero in the
third element of the list. Under lazy evaluation, the length function
returns the value 4 (i.e., the number of items in the list), since
evaluating it does not attempt to evaluate the terms making up the list.
In brief, strict evaluation always fully evaluates function arguments
before invoking the function. Lazy evaluation does not evaluate function
arguments unless their values are required to evaluate the function
call itself.
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
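The length example above can be modeled even in a strict language by delaying each element behind a thunk (a zero-argument function). The following Python sketch is illustrative and only simulates lazy evaluation:

```python
# Each element is wrapped in a thunk, so it is not evaluated until
# explicitly forced by calling it.
lazy_list = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]

# Taking the length never forces the thunks, so the division by zero
# in the third element is never evaluated.
n = len(lazy_list)  # 4
```

Forcing the third element with lazy_list[2]() would raise a ZeroDivisionError, just as strict evaluation of the original expression fails.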
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.
Launchbury 1993 describes some difficulties that lazy evaluation
introduces, particularly in analyzing a program's storage requirements,
and proposes an operational semantics to aid in such analysis.
Harper 2009 proposes including both strict and lazy evaluation in the
same language, using the language's type system to distinguish them.
Type systems
Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time at the risk of false positive errors (rejecting some valid programs). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time at the risk of false negative errors, rejecting invalid programs only at runtime, when there is enough information not to reject valid programs. The use of algebraic datatypes
makes manipulation of complex data structures convenient; the presence
of strong compile-time type checking makes programs more reliable in
absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.
Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.
Referential transparency
Functional programs do not have assignment statements, that is, the
value of a variable in a functional program never changes once defined.
This eliminates any chances of side effects because any variable can be
replaced with its actual value at any point of execution. So, functional
programs are referentially transparent.
Consider the C assignment statement x = x * 10, which changes the value assigned to the variable x. Say the initial value of x was 1; then two consecutive evaluations of the statement yield the values 10 and 100 for x respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives the program a different meaning, so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent.
Now consider another function, such as int plusone(int x) { return x + 1; }. It is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.
Data structures
Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array
with constant access and update times is a basic component of most
imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence,
a property of keeping previous versions of the data structure
unmodified. In Clojure, persistent data structures are used as
functional alternatives to their imperative counterparts. Persistent
vectors, for example, use trees for partial updating: calling the insert method creates only the nodes along the updated path, sharing the rest of the structure with the previous version.
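Structural sharing can be sketched with a minimal persistent (cons) list in Python, where nodes are immutable tuples (the names cons and empty are illustrative):

```python
# A persistent singly linked list: each node is an immutable pair of
# a head element and a tail list.
empty = None

def cons(head, tail):
    return (head, tail)

xs = cons(2, cons(3, empty))  # the list [2, 3]
ys = cons(1, xs)              # the list [1, 2, 3]

# "Updating" created one new node; the tail of ys is the very same
# object as xs, so the old version persists unmodified.
```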
Comparison to imperative programming
Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects,
which are used in imperative programming to implement state and I/O.
Pure functional programming completely prevents side-effects and
provides referential transparency.
Higher-order functions are rarely used in older imperative
programming. A traditional imperative program might use a loop to
traverse and modify a list. A functional program, on the other hand,
would probably use a higher-order "map" function that takes a function
and a list, generating and returning a new list by applying the function
to each list item.
Imperative vs. functional programming
The following two examples (written in JavaScript)
achieve the same effect: they multiply all even numbers in an array by
10 and add them all, storing the final sum in the variable "result".
Traditional Imperative Loop:
const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let result = 0;
for (let i = 0; i < numList.length; i++) {
if (numList[i] % 2 === 0) {
result += numList[i] * 10;
}
}
Functional Programming with higher-order functions:
const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
.filter(n => n % 2 === 0)
.map(a => a * 10)
.reduce((a, b) => a + b, 0);
The abstractions offered by functional programming can lead to the development of more robust code that avoids certain issues that can arise when building upon large amounts of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).
Simulating state
There
are tasks (for example, maintaining a bank account balance) that often
seem most naturally implemented with state. Pure functional programming
performs these tasks, and I/O tasks such as accepting user input and
printing to the screen, in a different way.
The pure functional programming language Haskell implements them using monads, derived from category theory.
Monads offer a way to abstract certain types of computational patterns,
including (but not limited to) modeling of computations with mutable
state (and other side effects such as I/O) in an imperative manner
without losing purity. While existing monads may be easy to apply in a
program, given appropriate templates and examples, many students find
them difficult to understand conceptually, e.g., when asked to define
new monads (which is sometimes needed for certain types of libraries).
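A minimal sketch of the monadic pattern, using an option/Maybe-style monad in Python (the names bind and safe_div are illustrative, and None stands in for the "Nothing" case):

```python
def bind(value, f):
    # Chain a computation onto a possibly-failed result: failure
    # (None) short-circuits; success is passed on to f.
    return None if value is None else f(value)

def safe_div(x, y):
    # A partial operation modeled purely: failure is a value,
    # not an exception or a mutation.
    return None if y == 0 else x / y

ok = bind(safe_div(12, 3), lambda v: safe_div(v, 2))   # 2.0
bad = bind(safe_div(12, 0), lambda v: safe_div(v, 2))  # None
```

The sequencing logic (stop on the first failure) lives entirely in bind, so the individual computations stay pure.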
Functional languages also simulate states by passing around
immutable states. This can be done by making a function accept the state
as one of its parameters, and return a new state together with the
result, leaving the old state unchanged.
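The bank-account example mentioned earlier can be sketched in this style (deposit and withdraw are illustrative names): each operation takes the current state and returns a result paired with a new state.

```python
def deposit(amount, balance):
    # Return a (result, new_state) pair; the old balance is untouched.
    return ("ok", balance + amount)

def withdraw(amount, balance):
    if amount > balance:
        return ("insufficient funds", balance)
    return ("ok", balance - amount)

_, b1 = deposit(100, 0)        # b1 == 100
status, b2 = withdraw(30, b1)  # b2 == 70, while b1 is still 100
```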
Impure functional languages usually include a more direct method of managing mutable state. Clojure,
for example, uses managed references that can be updated by applying
pure functions to the current state. This kind of approach enables
mutability while still promoting the use of pure functions as the
preferred way to express computations.
Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.
Efficiency issues
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal.
This is related to the fact that some mutable data structures like
arrays have a very straightforward implementation using present
hardware. Flat arrays may be accessed very efficiently with deeply
pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing),
or handled with SIMD instructions. It is also not easy to create their
equally efficient general-purpose immutable counterparts. For purely
functional languages, the worst-case slowdown is logarithmic in the
number of memory cells used, because mutable memory can be represented
by a purely functional data structure with logarithmic access time (such
as a balanced tree).
However, such slowdowns are not universal. For programs that perform
intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution
efficiency by allowing the compiler to make assumptions that are unsafe
in an imperative language, thus increasing opportunities for inline expansion.
Even though the copying implicit in updating persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes.
Immutable data with separation of identity and state, together with shared-nothing schemes, can also potentially be better suited for concurrent and parallel programming by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations on immutable data are usually atomic and can eliminate the need for locks. This is how, for example, java.util.concurrent
classes are implemented, where some of them are immutable variants of
the corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir or Akka.
Lazy evaluation
may also speed up the program, even asymptotically, whereas it may slow
it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008
give some practical advice for analyzing and fixing them.
However, the most general implementations of lazy evaluation making
extensive use of dereferenced code and data perform poorly on modern
processors with deep pipelines and multi-level caches (where a cache
miss may cost hundreds of cycles).
Abstraction cost
Some functional programming languages might not optimize abstractions such as higher order functions like "map" or "filter"
as efficiently as the underlying imperative operations. Consider, as an
example, the following two ways to check if 5 is an even number in Clojure:
(even? 5)
(.equals (mod 5 2) 0)
When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:
(defn even?
"Returns true if n is even, throws an exception if n is not an integer"
{:added "1.0"
:static true}
[n] (if (integer? n)
(zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
(throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))
has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that difference can be attributed to the type checking and
exception handling involved in the implementation of even?
. Consider, for comparison, the lo library for Go, which implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map
is 4% slower than an equivalent for
loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining.
One distinguishing feature of Rust is its zero-cost abstractions, meaning that using them imposes no additional runtime overhead. This is achieved through compiler optimizations such as loop unrolling, in which each iteration of a loop, whether imperative or written with iterators, is compiled into standalone assembly instructions without the overhead of the loop-control code. If an iterative operation writes to an array, the resulting array's elements may be kept in CPU registers, allowing constant-time access at runtime.
Functional programming in non-functional languages
It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.
JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools
standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.
In PHP, anonymous classes, closures
and lambdas are fully supported. Libraries and language extensions for
immutable data structures are being developed to aid programming in the
functional style.
In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements for closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes.
In C#,
anonymous classes are not necessary, because closures and lambdas are
fully supported. Libraries and language extensions for immutable data
structures are being developed to aid programming in the functional
style in C#.
Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.
Comparison to logic programming
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.
For example, the function mother(X) = Y (every X has only one mother
Y) can be represented by the relation mother(X, Y). Whereas functions
have a strict input-output pattern of arguments, relations can be
queried with any pattern of inputs and outputs. Consider the following
logic program:
mother(charles, elizabeth).
mother(harry, diana).
The program can be queried, like a functional program, to generate mothers from children:
?- mother(harry, X).
X = diana.
?- mother(charles, X).
X = elizabeth.
But it can also be queried backwards, to generate children:
?- mother(X, elizabeth).
X = charles.
?- mother(X, diana).
X = harry.
It can even be used to generate all instances of the mother relation:
?- mother(X, Y).
X = charles,
Y = elizabeth.
X = harry,
Y = diana.
Compared with relational syntax, functional syntax is a more compact
notation for nested functions. For example, the definition of maternal
grandmother in functional syntax can be written in the nested form:
maternal_grandmother(X) = mother(mother(X)).
The same definition in relational notation needs to be written in the unnested form:
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
Here the symbol :- means if, and the comma (,) means and.
However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:
grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry,
Y = elizabeth.
X = harry,
Y = phillip.
Ciao transforms the function-like notation into relational form and
executes the resulting logic program using the standard Prolog execution
strategy.
Applications
Text editors
Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. Richard Stallman, the original author of the most popular Emacs implementation, GNU Emacs, and of Emacs Lisp, considers Lisp one of his favorite programming languages.
Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.
Spreadsheets
Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system.
However, spreadsheets generally lack higher-order functions as well as
code reuse, and in some implementations, also lack recursion. Several
extensions have been developed for spreadsheet programs to enable
higher-order and reusable functions, but so far remain primarily
academic in nature.
Academia
Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
Industry
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming.
Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in data science, while ClojureScript, Elm and PureScript are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro, one of the biggest e-commerce platforms in Poland.
Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, in a manner similar to Gröbner basis optimizations, and also to satisfy regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.
Education
Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods.
Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics.
In particular, Scheme has been a relatively popular choice for teaching programming for years.