
Thursday, June 11, 2020

Conservation science (cultural heritage)

From Wikipedia, the free encyclopedia
 
With respect to cultural heritage, conservation science is the interdisciplinary study of the conservation of art, architecture, technical art history, and other cultural works through scientific inquiry. General areas of research include the technology and structure of artistic and historic works; in other words, the materials and techniques from which cultural, artistic, and historic objects are made. There are three broad categories of conservation science with respect to cultural heritage: 1) understanding the materials and techniques used by artists, 2) studying the causes of deterioration, and 3) improving methods, techniques, and materials for examination and treatment. Conservation science draws on chemistry, physics, biology, and engineering, as well as art history and anthropology. Institutions such as the Getty Conservation Institute specialize in publishing and disseminating information relating to both the tools and outcomes of conservation science research, as well as recent discoveries in the field.

Introduction

Prior to thorough scientific analysis, a detailed visual assessment of the object, heritage site, or artwork is necessary in addition to gathering all relevant historic and current documentation. Diagnosing the current state in a non-invasive way allows both conservators and conservation scientists to determine exactly what further analysis would be required and whether the subject of the study will be able to withstand more rigorous examination. Additionally, since the goal of conservation-restoration is to only do the minimum required for preservation, this initial assessment falls in line with the American Institute for Conservation (AIC) Code of Ethics which outlines best practice for conservators and scientists alike. 

Along with assessing the current state and potential risk of future deterioration of artworks and objects, scientific study may be necessary to determine if there is risk to the conservators themselves. For example, some pigments used in paintings contain highly toxic elements such as arsenic or lead and could be hazardous to those working with them. Alternatively, previous restoration efforts may have involved chemicals that are now known to have dangerous side effects with prolonged exposure. In these cases, conservation science may reveal the nature of these hazards as well as present solutions for how to prevent current and future exposure.

Material properties

Research into the chemical and physical properties intrinsic to the materials used to create cultural heritage objects is a large part of the study of conservation science. Materials science, in conjunction with the broader field of restoration and preservation, has resulted in what is now recognized as modern conservation. Using analytical techniques and tools, conservation scientists are able to determine what makes up a particular object or artwork. In turn, this knowledge informs how deterioration is likely to occur due to both environmental effects and the inherent traits of a given material. Determining the environment needed to maintain or prolong the current state of a material, and identifying treatments that will have the least reaction with and impact on the objects being studied, are the primary goals of conservation research. Conservation treatments fall under four broad categories: cleaning, desalination, consolidation, and deinfestation. Knowledge of the material properties of cultural heritage and how they deteriorate over time helps conservators formulate actions to preserve and conserve cultural heritage.

In many countries, including the United Kingdom and Italy, conservation science is considered part of the broader field called 'Heritage Science', which also encompasses scientific aspects less directly related to cultural heritage conservation, as well as its management and interpretation.

Paper

The majority of paper is made up of cellulose fibers. The deterioration of paper may be the result of pests such as vermin, insects, and microbes, or by theft, fire, and flood. More specifically, paper deteriorates from two mechanisms that alter its hue and weaken its fibers: acid-catalyzed hydrolysis and oxidation. Treatment for paper includes deacidification, bleaching and washing.

Safe environments for the storage and display of paper artifacts have a relative humidity (RH) between 40% and 65% and an ideal temperature of 18-20 °C (64-68 °F).

Textiles

Textiles are woven fabrics or cloth that represent culture, the material legacy of international trade, social history, agricultural development, artistic trends, and technological progress. There are four main material sources: animal, plant, mineral, and synthetic. Deterioration of textiles can be caused by exposure to ultraviolet (UV) or infrared (IR) light, incorrect relative humidity and temperature, pests, pollutants, and physical forces such as fire and water. Textiles may be treated in a number of ways including vacuuming, wet cleaning, dry cleaning, steaming, and ironing. To preserve the integrity of textiles, storage and display environments should admit as little light as possible. Safe environments for textiles have a temperature of around 21 °C (70 °F) and a relative humidity of 50%.

Leather

Leather is a manufactured product made from the skin of animals. Leather can deteriorate from red rot, excessive dryness resulting in cracking and breakage, fading from exposure to light, mold resulting in odors, stains, and distortion, and insects and dust, both of which can cause holes and abrasions. Corrosion can also occur when leather comes into contact with metals. There are two primary methods for leather conservation: application of dressings or treatments to prolong the life of the leather and improving the means by which leather is stored. The second method is a preventive approach while the first, an older method, is an interventive approach. Leather artifacts are best stored with relative humidity between 45% to 55% and a temperature of 18-20 °C (64-68 °F).

Glass and ceramics

Glass and ceramics are two of the most durable materials and can be maintained for much longer periods of time. The biggest risk to glass and ceramics is breakage; however, improper display and storage can lead to stains and discoloration. Ceramics can become stained from inappropriate cleaning and repair, while porous or cracked ceramics can develop stains from being soaked in water during cleaning. Increased temperatures can darken existing stains and can lead to cracks. Glass can become damaged by 'weeping glass', wherein droplets of moisture form on glass surfaces. This can leach out unstable components that produce an alkaline solution. If allowed to remain on the glass for an extended period of time, this solution can produce fine cracks known as crizzling. Careful handling and storage are the surest means of preventing damage to glass and ceramics. The table below lists recommended storage conditions for damaged and unstable objects:

Object type                Temperature            Relative humidity
Weeping glass              18-21 °C (65-70 °F)    40%
Crizzling glass            18-21 °C (65-70 °F)    55%
Archaeological ceramics    18-21 °C (65-70 °F)    45%

Metals

Metals are produced from ores that are found naturally in the environment. Most metal objects are made from combinations of individual metals called alloys and exhibit different strengths and colors based on their composition. Metals and alloys commonly found in cultural objects include gold, silver, copper, pewter, tin, and iron. The most common form of deterioration for metal is corrosion, which occurs when metals come into contact with water, acids, bases, salts, oils, polishes, pollutants, and chemicals. Mishandling metal objects can cause mechanical damage such as breakage, dents, and scratches. Over-polishing can lead to deterioration and potentially misidentification by removing plating, decoration, makers' marks, or engravings. Mechanical, electrical, and chemical interventions are often used in the treatment of metals. Appropriate storage of metal objects helps increase their longevity; it is recommended that metal objects be stored in closed systems with well-sealed doors and drawers at a relative humidity between 35% and 55%.

Plastics

Plastics experience degradation from several factors including light, ultraviolet radiation, oxygen, water, heat, and pollutants. There are no international standards for the storage of plastics so it is common for museums to employ similar methods to those used to preserve paper and other organic materials. A wide range of instruments and techniques can be used in the treatment of plastics including 3-D scanning and printing technologies as a means of reproducing broken or missing parts. Recommended relative humidity for plastics is 50% along with a temperature of 18–20 °C (64-68 °F).

Stone

Stone objects take on many forms including sculpture, architecture, ornamental decoration, or functional pieces. Deterioration of stone depends on several factors such as the type of stone, geographical or physical location, and maintenance. Stone is subject to a number of decay mechanisms that include environmental, mechanical, and applied decay. Erosion from air, water, and physical touch can wear away surface texture. Carved stone should not be regularly cleaned as cleaning can cause deterioration by opening its pores as well as removing surface features such as engravings, artists' tools, and historical marks. Dirt, moss, and lichen do not usually cause decay to stone but may add to its patina.

Wood

Wood is a biodegradable, organic material that is susceptible to deterioration from both living organisms and environmental factors. Some ancient wood is recognized for its archaeological value and falls into two categories: dry and waterlogged. The recommended temperature for storage and display of wooden artifacts is 21 °C (70 °F) during the winter months and 21-24 °C (70-75 °F) during the summer months. The recommended relative humidity for storage and display of wooden artifacts during the winter months is 35%-45% and 55%-65% during the summer months. Effective cleaning of wooden artifacts includes waxing, polishing, dusting, and buffing.

Paintings

Painting materials include acrylic paint, oil paint, egg tempera, lacquer, water color, and gouache. Conservation techniques for paintings include dirt and varnish removal, consolidation, structural treatments, in-painting, in-filling, and retouching of losses. It is recommended that paintings be stored with other heritage and art collections.

Mechanisms of deterioration

Conservation science studies the process by which the various mechanisms of deterioration cause changes to material culture that affect its longevity for future generations. These mechanisms may produce chemical, physical, or biological changes and differ based on the material properties of the subject at hand. A large portion of conservation science research is the study of the behavior of different materials under a range of environmental conditions. One method used by scientists is to artificially age objects in order to study what conditions cause or mitigate deterioration. The results of these investigations inform the field on the major risk factors as well as the strategies to control and monitor environmental conditions to aid in long-term preservation. Further, scientific inquiry has led to the development of more stable and long-term treatment methods and techniques for the types of damage that do occur.

Fire

Fire is caused by chemical reactions resulting in combustion. Organic material such as paper, textiles, and wood are especially susceptible to combustion. Inorganic material, while less susceptible, may still suffer damage if exposed to fire for any period of time. The materials used to extinguish fires, such as chemical retardants or water, can also result in further damage to material culture.

Water

Water primarily causes physical changes such as warping, stains, discoloration, and other weakening to both inorganic and organic materials. Water can come from natural sources such as flooding, mechanical/technological failures, or human error. Water damage to organic material may lead to the growth of other pests such as mold. In addition to the physical effects of water directly on an object or artwork, moisture in the air directly affects relative humidity which can in turn exacerbate deterioration and damage.

Light

Light causes cumulative and irreversible damage to light-sensitive objects. The energy from light interacts with objects at the molecular level and can lead to both physical and chemical damage such as fading, darkening, yellowing, embrittlement, and stiffening. Ultraviolet and infrared radiation, in addition to visible light, can be emitted from light sources and can also damage material culture. Cultural institutions are tasked with balancing the light that patrons and guests need against the collection's exposure, since any amount of light can be damaging to a variety of objects and artworks. Conservation science has helped establish 50 lux as the benchmark level of light intensity that allows the human eye to operate within the full range of the visible light spectrum. While this is a baseline for many museums, adjustments are often needed based on specific situations. Conservation science has informed the industry on the light sensitivity of common materials used in material culture and the length of exposure permissible before deterioration is likely to occur. Control strategies must be considered on an item-by-item basis. Light meters, ultraviolet meters, and thermometers (for infrared radiation) are some of the tools used to detect when levels fall outside an acceptable range.

Incorrect relative humidity

Relative humidity (RH) is the amount of water vapor in the air relative to the maximum amount the air can hold at a given temperature, ranging from damp to dry. Material properties determine the effect that different levels of RH can have on any particular item. Organic materials like wood, paper, and leather, as well as some inorganic materials like metals, are susceptible to damage from incorrect RH. Damage ranges from physical changes like cracking and warping of organic materials to chemical reactions like the corrosion of metals. Temperature has a direct effect on relative humidity: as warm air cools, relative humidity increases, and as cool air warms, relative humidity falls. Dampness can cause the growth of mold, which has its own damaging properties. Research in the field has determined the various ranges and fluctuations of incorrect humidity and the sensitivity of various objects to each, and has helped establish guidelines for proper environmental conditions specific to the objects in question.
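The temperature dependence of relative humidity can be sketched numerically. The following is a minimal illustration (not a conservation tool), assuming the Magnus approximation for saturation vapor pressure; the constants used are one common parameterization, and the function names are ours:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Approximate saturation vapor pressure in hPa via the Magnus
    formula (illustrative constants, valid roughly 0-50 degrees C)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def rh_after_cooling(rh_initial, t_initial, t_final):
    """RH after air with a fixed water-vapor content changes from
    t_initial to t_final; capped at 100% (condensation point)."""
    vapor_pressure = rh_initial / 100 * saturation_vapor_pressure(t_initial)
    return min(100.0, 100 * vapor_pressure / saturation_vapor_pressure(t_final))

# Air at 24 degrees C and 50% RH, cooled to 18 degrees C:
# the RH rises well above 50%, as described above.
print(round(rh_after_cooling(50, 24, 18), 1))
```

This mirrors the statement that cooling air raises relative humidity: the water-vapor content is unchanged, but the denominator (the air's capacity) shrinks.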

Incorrect temperature

Material properties directly determine the appropriate temperature needed to preserve an item. Incorrect temperatures, whether too high, too low, or fluctuating between the two, can cause varying levels of deterioration. Temperatures that are too high can lead to chemical and physical damage such as embrittlement, cracking, fading, and disintegration, and can also promote biological activity like mold growth. Temperatures that are too low can likewise result in physical damage such as embrittlement and cracking. Temperature fluctuations cause materials to expand and contract rapidly, which builds up stress within the material and leads to deterioration over time.

Pests

Pests include microorganisms, insects, and rodents and are able to disfigure, damage, and destroy material culture. Both organic material and inorganic material are highly susceptible. Damage can occur from pests consuming, burrowing into, and excreting on material. The presence of pests can be the result of other deterioration mechanisms such as incorrect temperature, incorrect relative humidity, and the presence of water. Fumigation and pesticides may also be damaging to certain materials and require careful consideration. Conservation science has aided in the development of thermal control methods to eradicate pests.

Pollutants

Pollutants consist of a wide range of compounds that can have detrimental chemical reactions with objects. Pollutants can be gases, aerosols, liquids, or solids and are able to reach objects from transference from other objects, dissipation in the air, or intrinsically as part of the object's makeup. They all have the potential to cause adverse reactions with material culture. Conservation science aids in identifying both material and pollutant properties and the types of reactions that will occur. Reactions range from discoloration and stains, to acidification and structural weakening. Dust is one of the most common airborne pollutants and its presence can attract pests as well as alter the object's surface. Research in the field informs conservators on how to properly manage damage that occurs as well as means to monitor and control pollutant levels.

Physical forces

Physical forces are any interaction with an object that changes its current state of motion. Physical forces can cause a range of damage from small cracks and fissures to complete destruction or disintegration of material. The level of damage is dependent on the fragility, brittleness, or hardness of the object's material and the magnitude of the force being inflicted. Impact, shock, vibration, pressure, and abrasion are a few examples of physical forces that can have adverse effects on material culture. Physical forces can occur from natural disasters like earthquakes, working forces like handling, cumulative forces like gravity, or low-level forces like building vibrations. During an object's risk assessment, the material properties of the object will inform the necessary steps (i.e. building, housing, and handling) that need to take place to mitigate the effects of physical forces.

Theft and vandalism

Theft, the removal of an asset, and vandalism, the deliberate destruction or disfigurement of an asset, are directly controlled and limited by the security measures put in place at a cultural institution. Conservation science can aid in the authentication or identification of stolen objects. In addition, research in the field can help inform decisions on the best course of action to repair, minimize, or mitigate damage from vandalism.

Dissociation

Dissociation is the loss of an object, its associated data, or its value due to outside influence. Adherence to proper policies and procedures is the best defense against dissociation, and as such, meticulous record keeping is the basis for all good practice. Conservation science aids in the authentication or identification of misplaced objects, and detailed records of all past, present, and future study are necessary for the prevention of dissociation.

Methods

Optical microscope used to visually study very small paint fragments (mounted in epoxy) as a means of identifying paints used by artists.
 
There are a variety of methods used by conservation scientists to support work in the fields of art conservation, architectural conservation, cultural heritage, and care of cultural objects in museums and other collections. In addition to the use of specialized equipment, visual inspections are often the first step in order to look for obvious signs of damage, decay, infilling, etc.

Prior to any type of scientific analysis, detailed documentation of the initial state of the object and justification for all proposed examinations is required to avoid unnecessary or potentially damaging study and keep the amount of handling to a minimum.  Processes such as stereomicroscopy can reveal surface features such as the weave of parchment paper, whether a print was done in relief or in intaglio, and even what kind of tools an artist may have used to create their works. While there are many different specialized and generic tools used for conservation science studies, some of the most common are listed below.

Scientific equipment 

  • X-ray fluorescence spectroscopy (XRF) of the wooden, painted portrait of a Roman portrait mummy. The portable tool is hooked up to a rig that allows it to pan left and right, up and down, so as to scan the entire surface of the portrait. The height can also be manually adjusted to ensure focus is maintained. This technique provides information on the paints used which aids in provenance and compositional studies.
    X-Ray Fluorescence Spectroscopy (XRF)
    • Can identify elements both on the surface and sub-surface by performing x-ray scans over the entirety of an artwork
    • Non-destructive/non-invasive method - scans of the object's surface do not require sampling or removal of material
  • Computerized Tomography Scanning (CT Scan) and Magnetic Resonance Imaging (MRI)
    • Non-destructive way to image larger objects
    • Can reveal sub-surface structure as well as some composition information
    • Particularly useful for imaging artifacts such as mummified remains to aid in identification and understanding of burial practices
  • Reflectance Transformation Imaging (RTI)
    • Method of surface imaging whereby the location of the light source can be changed so that an object or artwork is illuminated from a variety of directions
    • Non-invasive method that yields surface topography and texture to analyze surface features
  • Fourier Transform Infrared Spectroscopy (FTIR)
    • Method for identifying materials in works of art based on the fact that each compound has a characteristic set of chemical bonds, each of which absorbs infrared light at specific frequencies and produces a unique peak in the resulting spectrum
    • Minimally invasive method for chemical analysis that requires only very small quantities of sample taken from inconspicuous locations on artworks and objects
The type of material present will be the deciding factor in what method will be appropriate for study. For example, organic materials are likely to be destroyed if exposed to too much radiation, a concern when doing X-ray and electron-based imaging. Conservation scientists may specialize with specific materials and work closely with conservators and curators in order to determine appropriate analysis and treatment methods.

  • Scanning Electron Microscopy (SEM)
    • Able to take high resolution and high magnification micrographs to study structural and surface features
    • Also may involve using Energy Dispersive X-Ray Spectroscopy (EDS) to identify specific elements or compounds present in the object
    • Electron Backscatter Diffraction (EBSD) can provide better contrast within the microscope in order to better visualize different phases, materials, and compounds present to identify composition
    • Can help to determine paint composition (specific type of paint used) in art works and compounds that may aid in provenance queries
    • Allows scientists to analyze whether the object's appearance merits preservation or if there are products of deterioration and decay that ought to be removed or cleaned prior to preservation
    • Destructive/invasive method - requires obtaining a sample from an object or artwork and exposing it to the electron beam (with characteristic X-rays emitted for EDS analysis)
    Recursion (computer science)

    From Wikipedia, the free encyclopedia

    Tree created using the Logo programming language and relying heavily on recursion. Each branch can be seen as a smaller version of a tree.

    In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this requires identifying and indexing the smaller instances at programming time. By contrast, recursion solves such problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
    The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
    — Niklaus Wirth, Algorithms + Data Structures = Programs, 1976
    Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for.

    Repeatedly calling a function from within itself may cause the call stack to grow to a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for large problems, it is important to use optimization techniques such as tail call optimization.
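The stack-growth cost described above can be demonstrated directly. The sketch below uses Python purely for illustration; Python performs no tail-call optimization, so each recursive call occupies a stack frame and the recursive version fails on large inputs where the iterative version does not:

```python
def sum_recursive(n):
    """Sum 1..n recursively: each call adds a stack frame."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    """Same result in constant stack space."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_recursive(100) == sum_iterative(100))  # True

# For large n the recursive version exhausts the call stack,
# while the iterative version runs without difficulty.
try:
    sum_recursive(10**6)
except RecursionError:
    print("recursion limit exceeded")
```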

    Recursive functions and algorithms

    A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

    A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".
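The factorial definition above translates directly into code. A minimal sketch (in Python, for illustration) with the base and recursive cases marked:

```python
def factorial(n):
    """Recursive factorial: 0! = 1 and n! = n * (n - 1)! for n > 0."""
    if n == 0:                       # base (terminating) case
        return 1
    return n * factorial(n - 1)      # recursive case: simpler input n - 1

print(factorial(5))  # 120
```

Each recursive call reduces n by one, so the chain of calls is guaranteed to reach the base case.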

    The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances, for example some system and server processes, are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

    For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by co-recursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)".
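As a sketch of the two approaches just described (Python for illustration, names ours): the first function adds a `terms` parameter as the stopping criterion, while the generator gives the corecursive flavor in which successive terms of the output are the partial sums:

```python
import math

def e_approx(terms):
    """Sum the first `terms` terms of 1/0! + 1/1! + 1/2! + ...;
    the extra `terms` parameter supplies the base case."""
    if terms == 0:                    # stopping criterion
        return 0.0
    return 1.0 / math.factorial(terms - 1) + e_approx(terms - 1)

def partial_sums():
    """Corecursive flavor: an unbounded generator of partial sums."""
    total, k, term = 0.0, 0, 1.0
    while True:
        total += term
        yield total                   # nth partial sum
        k += 1
        term /= k                     # next term 1/k!

print(abs(e_approx(15) - math.e) < 1e-10)  # True
```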

    Recursive data types

    Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

    Inductively defined data

    An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

    data ListOfStrings = EmptyList | Cons String ListOfStrings
    

    The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings. 

    Another example of inductive definition is the natural numbers (or positive integers):

    A natural number is either 1 or n+1, where n is a natural number.

    Similarly recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

     <expr> ::= <number>
              | (<expr> * <expr>)
              | (<expr> + <expr>)
    

    This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
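A grammar like this maps naturally onto a recursive-descent evaluator, where the code recurses exactly where the grammar refers to itself. The sketch below (Python, illustrative and deliberately simplified) assumes fully parenthesized input exactly as the grammar requires:

```python
def evaluate(expression):
    """Evaluate an expression in the grammar
    <expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)."""
    tokens = expression.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse_expr():
        nonlocal pos
        if tokens[pos] == "(":        # ( <expr> op <expr> )
            pos += 1                  # consume "("
            left = parse_expr()       # recursive reference to <expr>
            op = tokens[pos]
            pos += 1
            right = parse_expr()
            pos += 1                  # consume ")"
            return left * right if op == "*" else left + right
        value = int(tokens[pos])      # <number> base case
        pos += 1
        return value

    return parse_expr()

print(evaluate("(5 * ((3 * 6) + 8))"))  # 130
```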

    Coinductively defined data and corecursion

    A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

    A coinductive definition of infinite streams of strings, given informally, might look like this: 

    A stream of strings is an object s such that:
     head(s) is a string, and
     tail(s) is a stream of strings.
    

    This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from. 

    Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program (e.g. here).
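As an illustrative sketch, Python generators provide the lazy evaluation that corecursion relies on: the program below defines an unbounded stream of primes (simple trial division, chosen only for brevity) and then takes a finite portion of the infinite result:

```python
from itertools import count, islice

def primes():
    """An unbounded stream of primes, defined by how to produce the
    next element rather than as a finite structure."""
    found = []
    for candidate in count(2):
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
            yield candidate

# Taking a finite portion of the (conceptually infinite) result:
print(list(islice(primes(), 5)))  # [2, 3, 5, 7, 11]
```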

    Types of recursion

    Single recursion and multiple recursion

    Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search. 

    Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

    Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while naively computing the Fibonacci sequence is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values while tracking two successive values at each step (see corecursion: examples). A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal rather than multiple recursion.
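The Fibonacci conversion described above can be sketched as follows (Python, for illustration):

```python
def fib_multiple(n):
    """Naive multiple recursion: two self-references per call,
    exponential running time."""
    if n < 2:
        return n
    return fib_multiple(n - 1) + fib_multiple(n - 2)

def fib_single(n, a=0, b=1):
    """Single recursion: one self-reference, passing the two most
    recent values as parameters (linear number of calls)."""
    if n == 0:
        return a
    return fib_single(n - 1, b, a + b)

print(fib_multiple(10), fib_single(10))  # 55 55
```

Because `fib_single` makes only one self-call, it can be replaced mechanically by a loop; `fib_multiple` cannot, without an explicit stack or memoization.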

    Indirect recursion

    Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

    Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing; from the point of view of g alone, g is indirectly recursing; and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.

    Anonymous recursion

    Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

    Structural versus generative recursion

    Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:
    [Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.
    — Felleisen, Findler, Flatt, and Krishnamurthi, How to Design Programs, 2001
    Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.

    Generative recursion is the alternative:
    Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.
    — Matthias Felleisen, Advanced Functional Programming, 2002
    This distinction is important in proving termination of a function.
    • All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.
    • Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.
    • In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.
    • By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation" that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

    Recursive programs

    Recursive procedures

    Factorial

    A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:
    Pseudocode (recursive):
    function factorial is:
    
    input: integer n such that n >= 0
    
    output: [n × (n-1) × (n-2) × … × 1]
    
    
        1. if n is 0, return 1
        2. otherwise, return [ n × factorial(n-1) ]
    
    
    end factorial
    

    The function can also be written as a recurrence relation:
    b_n = n * b_{n-1}
    b_0 = 1
    This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above: 

    Computing the recurrence relation for n = 4:
    b4           = 4 * b3
    
                 = 4 * (3 * b2)
                 = 4 * (3 * (2 * b1))
                 = 4 * (3 * (2 * (1 * b0)))
                 = 4 * (3 * (2 * (1 * 1)))
                 = 4 * (3 * (2 * 1))
                 = 4 * (3 * 2)
                 = 4 * 6
                 = 24
    

    This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages:

    Pseudocode (iterative):
     
    function factorial is:
    
    input: integer n such that n >= 0
    
    output: [n × (n-1) × (n-2) × … × 1]
    
    
        1. create new variable called running_total with a value of 1
    
    
        2. begin loop
              1. if n is 0, exit loop
              2. set running_total to (running_total × n)
              3. decrement n
              4. repeat loop
    
    
        3. return running_total
    
    
    end factorial
    

    The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

    fact(n) = fact_acc(n, 1)
    fact_acc(n, t) = t                        if n = 0
    fact_acc(n, t) = fact_acc(n - 1, n * t)   if n > 0

    The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.

    Greatest common divisor

    The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively. 

    Function definition:
    Pseudocode (recursive): 
     
    function gcd is:
    input: integer x, integer y such that x > 0 and y >= 0
    
    
        1. if y is 0, return x
        2. otherwise, return [ gcd( y, (remainder of x/y) ) ]
    
    
    end gcd
    

    Recurrence relation for greatest common divisor, where x % y expresses the remainder of x / y:
    gcd(x, y) = x               if y = 0
    gcd(x, y) = gcd(y, x % y)   if y > 0
    Computing the recurrence relation for x = 27 and y = 9:
    gcd(27, 9)   = gcd(9, 27 % 9)
                 = gcd(9, 0)
                 = 9
    
    Computing the recurrence relation for x = 111 and y = 259:
    gcd(111, 259)   = gcd(259, 111 % 259)
                    = gcd(259, 111)
                    = gcd(111, 259 % 111)
                    = gcd(111, 37)
                    = gcd(37, 111 % 37)
                    = gcd(37, 0)
                    = 37
    
    The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

    Pseudocode (iterative):
     
    function gcd is:
    
    input: integer x, integer y such that x >= y and y >= 0
    
    
        1. create new variable called remainder
    
    
        2. begin loop
              1. if y is zero, exit loop
              2. set remainder to the remainder of x/y
              3. set x to y
              4. set y to remainder
              5. repeat loop
    
    
        3. return x
    
    
    end gcd
    

    The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.

    Towers of Hanoi

    The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion. There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?
    Function definition: hanoi(n) is the minimum number of moves needed to move a stack of n disks.
    Recurrence relation for hanoi:
    hanoi(n) = 2*hanoi(n-1) + 1
    hanoi(1) = 1
    Computing the recurrence relation for n = 4:
    hanoi(4)     = 2*hanoi(3) + 1
                 = 2*(2*hanoi(2) + 1) + 1
                 = 2*(2*(2*hanoi(1) + 1) + 1) + 1
                 = 2*(2*(2*1 + 1) + 1) + 1
                 = 2*(2*(3) + 1) + 1
                 = 2*(7) + 1
                 = 15
    

    Example implementations:

    Pseudocode (recursive): 
     
    function hanoi is:
    
    input: integer n, such that n >= 1
    
    
        1. if n is 1 then return 1
    
    
        2. return [2 * [call hanoi(n-1)] + 1]
    
    
    end hanoi
    

    Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.

    An explicit formula for Towers of Hanoi:
     
    h1 = 1   = 2^1 - 1
    h2 = 3   = 2^2 - 1
    h3 = 7   = 2^3 - 1
    h4 = 15  = 2^4 - 1
    h5 = 31  = 2^5 - 1
    h6 = 63  = 2^6 - 1
    h7 = 127 = 2^7 - 1
    
    In general:
    hn = 2^n - 1, for all n >= 1
    

    Binary search

    The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

    Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

    Example implementation of binary search in C: 

     /*
      Call binary_search with proper initial conditions.
    
      INPUT:
        data is an array of integers SORTED in ASCENDING order,
        toFind is the integer to search for,
        count is the total number of elements in the array
    
      OUTPUT:
        result of binary_search
    
     */
     int search(int *data, int toFind, int count)
     {
        //  Start = 0 (beginning index)
        //  End = count - 1 (top index)
        return binary_search(data, toFind, 0, count-1);
     }
    
     /*
       Binary Search Algorithm.
    
       INPUT:
        data is an array of integers SORTED in ASCENDING order,
            toFind is the integer to search for,
            start is the minimum array index,
            end is the maximum array index
       OUTPUT:
            position of the integer toFind within array data,
            -1 if not found
     */
     int binary_search(int *data, int toFind, int start, int end)
     {
        //Get the midpoint.
        int mid = start + (end - start)/2;   //Integer division
    
        //Stop condition.
        if (start > end)
           return -1;
        else if (data[mid] == toFind)        //Found?
           return mid;
        else if (data[mid] > toFind)//Data is greater than toFind, search lower half
           return binary_search(data, toFind, start, mid-1);
        else //Data is less than toFind, search upper half
           return binary_search(data, toFind, mid+1, end);
     }
    

    Recursive data structures (structural recursion)

    An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.
    "Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms."
    The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively.
    As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.

    Linked lists

    Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.

    struct node
    {
      int data;           // some integer data
      struct node *next;  // pointer to another struct node
    };
    

    Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

    void list_print(struct node *list)
    {
        if (list != NULL)               
        {
           printf ("%d ", list->data);  
           list_print (list->next);     
        }
    }
    

    Binary trees

    Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree). 

    struct node
    {
      int data;            // some integer data
      struct node *left;   // pointer to the left subtree
      struct node *right;  // pointer to the right subtree
    };
    

    Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:
     
    // Test if tree_node contains i; return 1 if so, 0 if not.
    int tree_contains(struct node *tree_node, int i) {
        if (tree_node == NULL)
            return 0;  // base case
        else if (tree_node->data == i)
            return 1;
        else
            return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
    } 
     
    At most two recursive calls will be made for any given call to tree_contains as defined above.
     
    // Inorder traversal:
    void tree_print(struct node *tree_node) {
            if (tree_node != NULL) {                  
                    tree_print(tree_node->left);      
                    printf("%d ", tree_node->data);   
                    tree_print(tree_node->right);     
            }
    }
    
    The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree in which the data elements of each node are in order.

    Filesystem traversal

    Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable. More specifically, the code below is an example of a preorder traversal of a filesystem.

    import java.io.*;
    
    public class FileSystem {
    
     public static void main (String [] args) {
      traverse ();
     }
    
     /**
      * Obtains the filesystem roots
      * Proceeds with the recursive filesystem traversal
      */
     private static void traverse () {
      File [] fs = File.listRoots ();
      for (int i = 0; i < fs.length; i++) {
       if (fs[i].isDirectory () && fs[i].canRead ()) {
        rtraverse (fs[i]);
       }
      }
     }
    
     /**
      * Recursively traverse a given directory
      *
      * @param fd indicates the starting point of traversal
      */
     private static void rtraverse (File fd) {
      File [] fss = fd.listFiles ();
      if (fss == null) return;  // listFiles may return null on an I/O error
    
      for (int i = 0; i < fss.length; i++) {
       System.out.println (fss[i]);
       if (fss[i].isDirectory () && fss[i].canRead ()) {
        rtraverse (fss[i]);
       }
      }
     }
    
    }
    

    This code blurs the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method "rtraverse" is purely a direct example; the method "traverse" is the indirect one, which calls "rtraverse". This example needs no explicit "base case" check because there will always be some finite number of files or directories in a given filesystem.

    Implementation issues

    In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:
    • Wrapper function (at top)
    • Short-circuiting the base case, aka "Arm's-length recursion" (at bottom)
    • Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough
    On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.

    Wrapper function

    A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

    Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

    Short-circuiting the base case

    Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function. 

    Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings. 

    Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.

    Depth-first search

    A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see binary trees section for standard recursive discussion.

    The standard recursive algorithm for a DFS is:
    • base case: If current node is Null, return false
    • recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children
    In short-circuiting, this is instead:
    • check value of current node, return true if match,
    • otherwise, on children, if not Null, then recurse.
    In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

    In the case of a perfect binary tree of height h, there are 2^(h+1) - 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

    In C, the standard recursive algorithm may be implemented as: 

    bool tree_contains(struct node *tree_node, int i) {
        if (tree_node == NULL)
            return false;  // base case
        else if (tree_node->data == i)
            return true;
        else
            return tree_contains(tree_node->left, i) ||
                   tree_contains(tree_node->right, i);
    }
    
    The short-circuited algorithm may be implemented as:
    // Wrapper function to handle empty tree
    bool tree_contains(struct node *tree_node, int i) {
        if (tree_node == NULL)
            return false;  // empty tree
        else
            return tree_contains_do(tree_node, i); 
    }
    
    // Assumes tree_node != NULL
    bool tree_contains_do(struct node *tree_node, int i) {
        if (tree_node->data == i)
            return true;  // found
        else  // recurse
            return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
                   (tree_node->right && tree_contains_do(tree_node->right, i));
    }
    

    Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

    Hybrid algorithm

    Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

    Recursion versus iteration

    Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead. Implementing a multiply recursive algorithm using iteration, however, may not be easily achievable.

    Compare the templates to compute x_n, defined by x_n = f(n, x_{n-1}), starting from x_base:
    function recursive(n)
        if n == base
            return xbase
        else
            return f(n, recursive(n-1)) 
    
    function iterative(n)
        x = xbase
        for i = n downto base
            x = f(i, x)
        return x
    

    For an imperative language the overhead is to define the function; for a functional language the overhead is to define the accumulator variable x.

    For example, a factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than by passing arguments and returning values by recursion: 

    unsigned int factorial(unsigned int n) {
      unsigned int product = 1; // empty product is 1
      while (n) {
        product *= n;
        --n;
      }
      return product;
    }
    

    Expressive power

    Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.

    Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and for loops are routinely rewritten in recursive form in functional languages. However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack, although tail call elimination may be a feature that is not covered by a language's specification, and different implementations of the same language may differ in tail call elimination capabilities.

    Performance issues

    In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable. 

    As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

    Stack space

    In some programming languages, the maximum size of the call stack is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language. Note the caveat below regarding the special case of tail recursion.

    Vulnerability

    Because recursive algorithms can be subject to stack overflows, they may be vulnerable to pathological or malicious input. Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature. Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, and exception handling logic may not prevent the corresponding process from being terminated.

    Multiply recursive problems

    Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; though both recursive and iterative methods are used, they contrast with list traversal and linear search in a list, which is a singly recursive and thus naturally iterative method. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.

    Refactoring recursion

    Recursive algorithms can be replaced with non-recursive counterparts. One method for replacing recursive algorithms is to simulate them using heap memory in place of stack memory. An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging. For example, recursive algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, were once typical. Non-recursive algorithms for the same purpose, such as the Krauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion and have improved only gradually based on techniques such as collecting tests and profiling performance.

    Tail-recursive functions

    Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops.

    Tail recursion:

    //INPUT: Integers x, y such that x >= y and y >= 0
    int gcd(int x, int y)
    {
      if (y == 0)
         return x;
      else
         return gcd(y, x % y);
    }

    Augmenting recursion:

    //INPUT: n is an Integer such that n >= 0
    int fact(int n)
    {
       if (n == 0)
          return 1;
       else
          return n * fact(n - 1);
    }
    
    The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it branches directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
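    With tail-call elimination, gcd above effectively compiles down to a loop. A sketch of that loop, in the style of the snippets above (the name gcdIterative is illustrative):

```c
//INPUT: Integers x, y such that x >= y and y >= 0
int gcdIterative(int x, int y)
{
   while (y != 0) {
      int r = x % y;   /* the arguments the tail call would receive */
      x = y;
      y = r;
   }
   return x;           /* the base case y == 0 */
}
```

    Each loop iteration corresponds to one recursive call of gcd, with the parameter updates playing the role of passing new arguments, so the function runs in constant space.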

    Order of execution

    In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The instructions placed after the recursive call are executed once per recursion level, in reverse order, as the calls return after the maximum recursion depth has been reached. Consider this example:

    Function 1

    void recursiveFunction(int num) {
       printf("%d\n", num);
       if (num < 4)
          recursiveFunction(num + 1);
    }
    

    Function 2 with swapped lines

    void recursiveFunction(int num) {
       if (num < 4)
          recursiveFunction(num + 1);
       printf("%d\n", num);
    }
    
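    The two orders can be checked by recording the values instead of printing them. A sketch assuming the same num < 4 bound and an initial call with num = 1 (the names recordBefore and recordAfter are illustrative):

```c
/* Variant of Function 1: record num before the recursive call,
   so values are captured on the way down the recursion. */
int recordBefore(int num, int *out, int n) {
   out[n++] = num;
   if (num < 4)
      n = recordBefore(num + 1, out, n);
   return n;
}

/* Variant of Function 2: record num after the recursive call,
   so values are captured on the way back up the recursion. */
int recordAfter(int num, int *out, int n) {
   if (num < 4)
      n = recordAfter(num + 1, out, n);
   out[n++] = num;
   return n;
}
```

    Called with num = 1 and n = 0, recordBefore fills the array with {1, 2, 3, 4} and recordAfter with {4, 3, 2, 1}, matching the output orders of Function 1 and Function 2.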

    Time-efficiency of recursive algorithms

    The time efficiency of a recursive algorithm can be expressed as a recurrence relation in Big O notation, which can then (usually) be simplified into a single Big O term.

    Shortcut rule (master theorem)

    If the time-complexity of the function is in the form

        T(n) = a·T(n/b) + f(n)

    then the Big O of the time-complexity is determined as follows:
    • If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a))
    • If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n)
    • If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n))
    where a represents the number of recursive calls at each level of recursion, b represents the factor by which the input shrinks at the next level of recursion (i.e. the number of pieces you divide the problem into), and f(n) represents the work the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion.
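    As a worked application of the rule above, take merge sort's standard recurrence (assumed here as an example):

```
T(n) = 2·T(n/2) + Θ(n)              (a = 2, b = 2, f(n) = Θ(n))
log_b a = log_2 2 = 1, so f(n) = Θ(n^1) = Θ(n^(log_b a))   → case 2
Therefore T(n) = Θ(n^(log_b a) · log n) = Θ(n log n)
```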
