Frequency is the number of occurrences of a repeating event per unit of time. It is also occasionally referred to as temporal frequency to emphasize the contrast to spatial frequency, and ordinary frequency to emphasize the contrast to angular frequency. Frequency is expressed in units of hertz (Hz) which is equivalent to one (event) per second. The corresponding period is the time duration of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a heart beats at a frequency of 120 times a minute (2 hertz), its period, T—the time interval between beats—is half a second (60 seconds divided by 120 beats). Frequency is an important parameter used in science and engineering to specify the temporal rate of change observed in oscillatory and periodic phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
Definitions and units
A pendulum with a period of 2.8 s and a frequency of 0.36 Hz
For cyclical phenomena such as oscillations, waves, or for examples of simple harmonic motion, the term frequency is defined as the number of cycles or vibrations per unit of time. The conventional symbol for frequency is f; the Greek letter ν (nu) is also used. The period is the time taken to complete one cycle of an oscillation. The relation between the frequency and the period is given by the equation f = 1/T.
The term temporal frequency is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time, and not unit distance.
The SI derived unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, "cycles per second" (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. 60 rpm is equivalent to one hertz.
Wind-generated waves are described in terms of their period rather than frequency.
Period versus frequency
As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. Some commonly used conversions are listed below:
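Since f = 1/T, these conversions reduce to taking reciprocals. A minimal sketch (the helper names are illustrative, not from the text):

```python
# Period and frequency are reciprocals: f = 1/T and T = 1/f.

def frequency_from_period(period_s: float) -> float:
    """Frequency in hertz from period in seconds."""
    return 1.0 / period_s

def period_from_frequency(freq_hz: float) -> float:
    """Period in seconds from frequency in hertz."""
    return 1.0 / freq_hz

def rpm_to_hz(rpm: float) -> float:
    """Revolutions per minute to hertz: 60 rpm = 1 Hz."""
    return rpm / 60.0

print(frequency_from_period(0.5))  # a heart beating every 0.5 s -> 2.0 Hz
print(rpm_to_hz(60))               # -> 1.0 Hz
```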
Angular frequency is commonly measured in radians per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity. Angular frequency ω (in rad/s) is larger than ordinary frequency f (in Hz) by a factor of 2π: ω = 2πf.
Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g.:
Wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per metre. In the case of more than one spatial dimension, wavenumber is a vector quantity.
For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, λ (lambda). Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ.
In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When monochromatic waves travel from one medium to another, their frequency remains the same—only their wavelength and speed change.
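The relation f = v/λ can be checked numerically; the helper below is an illustrative sketch using the vacuum case v = c:

```python
# f = v / λ, so λ = v / f. Example values: the speed of light and a
# 100 MHz FM radio signal, which has a wavelength of about 3 metres.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(speed: float, frequency: float) -> float:
    """Wavelength λ = v / f, in metres."""
    return speed / frequency

print(wavelength(C, 100e6))  # ~3 m for a 100 MHz radio wave
```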
Measurement of frequency can be done in the following ways:
Counting
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds, the frequency is f = 71 / 15 s ≈ 4.73 Hz.
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small.
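The counting method and its gating error can be sketched as follows (function names are illustrative):

```python
# Frequency by counting: f ≈ N / T, with a gating uncertainty of up to
# one count, i.e. an average frequency error of 1/(2T).

def counted_frequency(num_events: int, interval_s: float) -> float:
    """Estimate frequency by dividing event count by the gate time."""
    return num_events / interval_s

def gating_error_hz(interval_s: float) -> float:
    """Average error from the half-count gating uncertainty."""
    return 0.5 / interval_s

print(counted_frequency(71, 15.0))  # ≈ 4.73 Hz, the example in the text
print(gating_error_hz(15.0))        # ≈ 0.033 Hz average gating error
```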
A resonant-reed frequency meter, an obsolete device used from about 1900 to the 1940s for measuring the frequency of alternating current. It consists of a strip of metal with reeds of graduated lengths, vibrated by an electromagnet. When the unknown frequency is applied to the electromagnet, the reed which is resonant at that frequency will vibrate with large amplitude, visible next to the scale.
Stroboscope
An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
Heterodyne methods
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To reach higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
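The mixing step can be checked numerically: the product of two sinusoids contains the sum and difference frequencies. A small sketch, assuming ideal multiplication as the nonlinearity:

```python
import math

# Product-to-sum identity behind heterodyning:
# cos(2πf1·t)·cos(2πf2·t) = ½[cos(2π(f1−f2)t) + cos(2π(f1+f2)t)]
# The low "beat" component sits at the difference frequency |f1 − f2|.

f1, f2 = 1000.0, 1003.0  # Hz; the beat is at |f1 - f2| = 3 Hz

for t in (0.0, 0.1, 0.25, 0.5):
    mixed = math.cos(2 * math.pi * f1 * t) * math.cos(2 * math.pi * f2 * t)
    beats = 0.5 * (math.cos(2 * math.pi * (f1 - f2) * t)
                   + math.cos(2 * math.pi * (f1 + f2) * t))
    assert abs(mixed - beats) < 1e-9  # identity holds at each sample

print(abs(f1 - f2))  # beat frequency: 3.0 Hz
```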
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 400 THz (4×10¹⁴ Hz) is red light, 800 THz (8×10¹⁴ Hz) is violet light, and between these (in the range 400–800 THz) are all the other colors of the visible spectrum. An electromagnetic wave with a frequency less than 4×10¹⁴ Hz will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave with a frequency higher than 8×10¹⁴ Hz will also be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through a vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies.
λ = c/f, where c is the speed of light (c in a vacuum, or less in other media), f is the frequency and λ is the wavelength.
In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency.
The sound wave spectrum, with rough guide of some applications
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, the frequency components of a sound determine its "color", its timbre. When speaking of the frequency (in the singular) of a sound, it refers to the property that most determines its pitch.
The frequencies an ear can hear are limited to a specific range. The audible frequency range for humans is typically given as between about 20 Hz and 20,000 Hz (20 kHz), though the high-frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
In Europe, Africa, Australia, southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show where the recording was made: in a country using a European or an American grid frequency.
In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
For any program f that might determine if programs halt, a "pathological" program g, called with some input, can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do. No f can exist that handles this case. A key part of the proof is a mathematical definition of a computer and program, which is known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first cases of decision problems proven to be unsolvable. This proof is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.
Jack Copeland attributes the introduction of the term halting problem to the work of Martin Davis in the 1950s.
Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
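A concrete illustration (not from the original text): the Collatz iteration halts quickly on any small input one tries, yet whether it halts for every positive integer is a famous open question, which shows why simply running a program settles little in general.

```python
# The Collatz iteration: halve even numbers, map odd n to 3n + 1.
# Whether this loop terminates for every positive n is unproven.

def collatz_steps(n: int) -> int:
    """Count steps until n reaches 1 (assuming it ever does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # halts after 8 steps: 6,3,10,5,16,8,4,2,1
print(collatz_steps(27))  # halts, but only after 111 steps
```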
While deciding whether simple programs halt is easy, more complex programs prove problematic. One approach to the problem might be to run the program for some number of steps and check if it halts. But if the program does not halt, it is unknown whether the program will eventually halt or run forever. Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct.
Programming consequences
Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish. In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish, but are also guaranteed to finish before a given deadline. Sometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to write in a restricted style, such as MISRA C or SPARK, that makes it easy to prove that the resulting subroutines finish before the given deadline.
Other times these programmers apply the rule of least power: they deliberately use a computer language that is not quite fully Turing-complete. Frequently, these are languages that guarantee all subroutines finish, such as Coq.
Common pitfalls
The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "does not halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.
There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "does not halt" for programs that do not halt.
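This one-sided behaviour can be sketched with a step-bounded interpreter. Here programs are modeled as Python generators, an illustrative simplification of running source code under an interpreter:

```python
# Run a "program" (a generator) for at most `budget` steps.
# Returns True if it was observed to halt; None if the budget ran out,
# which is inconclusive: the program may halt later, or never.

def runs_within(program, budget: int):
    it = program()
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True   # observed a halt
    return None           # inconclusive

def halts_quickly():
    for i in range(10):
        yield i           # ten steps, then stop

def loops_forever():
    while True:
        yield             # never stops

print(runs_within(halts_quickly, 100))  # True
print(runs_within(loops_forever, 100))  # None
```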
The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration:
...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine...
However, a computer with a million small parts, each with two states, would have at least 2^1,000,000 possible states:
This is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle:
Although a machine may be finite, and finite automata "have a number of theoretical limitations":
...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance.
It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
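The halt-or-repeat argument can be sketched directly: track the visited configurations of a deterministic finite-state system and report whichever comes first. The toy machines below are illustrative:

```python
# A deterministic program on finite memory either halts or revisits a
# configuration. With finitely many states, one of the two must happen.

def halt_or_cycle(state, step, is_halted):
    """Return ('halts', n) or ('cycles', n) after n transitions."""
    seen = set()
    n = 0
    while True:
        if is_halted(state):
            return ('halts', n)
        if state in seen:
            return ('cycles', n)   # repeated configuration: runs forever
        seen.add(state)
        state = step(state)
        n += 1

# A toy 8-state machine stepping s -> (s + 3) % 8, halting at state 5:
print(halt_or_cycle(0, lambda s: (s + 3) % 8, lambda s: s == 5))
# Stepping s -> (s + 2) % 8 from 0 never reaches state 5, so it cycles:
print(halt_or_cycle(0, lambda s: (s + 2) % 8, lambda s: s == 5))
```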
The halting problem is historically important because it was one of the first problems to be proved undecidable. In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing's proof was published later, in January 1937. Since then, many other undecidable problems have been described.
1920 – 1921: Emil Post explores the halting problem for tag systems, regarding it as a candidate for unsolvability. Its unsolvability was not established until much later, by Marvin Minsky.
1928: Hilbert recasts his 'Second Problem' at the Bologna International Congress. He posed three questions: Was mathematics complete? Was mathematics consistent? Was mathematics decidable? The third question is known as the Entscheidungsproblem (Decision Problem).
1930: Kurt Gödel announces a proof as an answer to the first two of Hilbert's 1928 questions.
"At first he [Hilbert] was only angry and frustrated, but then he began to try to deal constructively with the problem... Gödel himself felt—and expressed the thought in his paper—that his work did not contradict Hilbert's formalistic point of view"
1931: Gödel publishes "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I"
19 April 1935: Alonzo Church publishes "An Unsolvable Problem of Elementary Number Theory", which proposes that the intuitive notion of an effectively calculable function can be formalized by the general recursive functions or equivalently by the lambda-definable functions. He proves that the halting problem for lambda calculus (i.e., whether a given lambda-expression has a normal form) is not effectively calculable.
1936: Church publishes the first proof that the Entscheidungsproblem is unsolvable.
7 October 1936: Emil Post's paper "Finite Combinatory Processes. Formulation I" is received. Post adds to his "process" an instruction "(C) Stop". He called such a process "type 1 ... if the process it determines terminates for each specific problem."
May 1936 – January 1937: Alan Turing's paper On Computable Numbers With an Application to the Entscheidungsproblem went to press in May 1936 and reached print in January 1937. Turing's proof departs from calculation by recursive functions and introduces the notion of computation by machine. This is one of the "first examples of decision problems proved unsolvable".
1939: J. Barkley Rosser observes the essential equivalence of "effective method" defined by Gödel, Church, and Turing.
1943: In a paper, Stephen Kleene states that "In setting up a complete algorithmic theory, what we do is describe a procedure ... which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, 'Yes' or 'No,' to the question, 'Is the predicate value true?'."
1952: Kleene includes a discussion of the unsolvability of the halting problem for Turing machines and reformulates it in terms of machines that "eventually stop", i.e. halt: "... there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops."
1952: Martin Davis uses the term 'halting problem' in a series of lectures at the Control Systems Laboratory at the University of Illinois in 1952. It is likely that this is the first such use of the term.
Etymology of the phrase "halting problem"
In none of his work did Turing use the word "halting" or "termination". Turing's biographer Hodges does not have the word "halting" or words "halting problem" in his index. The earliest recorded use of the words "halting problem" is in a proof by Davis in 1958:
"Theorem 2.2 There exists a Turing machine whose halting problem is recursively unsolvable.
A related problem is the printing problem for a simple Turing machine Z with respect to a symbol Si".
Davis adds no attribution for his proof, so one infers that it is original with him. But Davis has said that Kleene stated the proof informally. Copeland states that:
"The halting problem was so named (and it appears, first stated) by Martin Davis... (It is often said that Turing stated and proved the halting theorem in 'On Computable Numbers', but strictly this is not true)."
What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
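The mapping from strings to numbers can be sketched with bijective base-n notation (an illustrative encoding; any injective scheme would do):

```python
# Bijective base-n: '' -> 0, the single letters -> 1..n, and so on,
# giving every string over an n-letter alphabet a distinct natural number.

def string_to_number(s: str, alphabet: str) -> int:
    n = len(alphabet)
    value = 0
    for ch in s:
        value = value * n + alphabet.index(ch) + 1
    return value

def number_to_string(value: int, alphabet: str) -> str:
    n = len(alphabet)
    chars = []
    while value > 0:
        value, rem = divmod(value - 1, n)
        chars.append(alphabet[rem])
    return ''.join(reversed(chars))

# Round-trip over the two-letter alphabet "ab":
assert number_to_string(string_to_number("bab", "ab"), "ab") == "bab"
print(string_to_number("bab", "ab"))  # -> 12
```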
The conventional representation of decision problems is the set of objects possessing the property in question. The halting set
K = {(i, x) | program i halts when run on input x}
represents the halting problem.
This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.
There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:
{i | program i eventually halts when run with input 0}
{i | there is an input x such that program i eventually halts when run with input x}.
Proof concept
Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable. The proof proceeds as follows: Suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider the following subroutine:
def g():
    if halts(g):
        loop_forever()
halts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, g does the opposite of what halts says g should do, so halts(g) cannot return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false.
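Strachey's construction can be sketched against any concrete candidate decider; the naive decider below is illustrative, and the same construction defeats every candidate:

```python
# From any proposed decider `halts` we can build a g that does the
# opposite of the prediction. Here the candidate is deliberately naive.

def naive_halts(f) -> bool:
    """A stand-in decider; any real candidate fails the same way."""
    return True

def loop_forever():
    while True:
        pass

def make_g(halts):
    def g():
        if halts(g):
            loop_forever()  # halts predicted True, so g refuses to halt
        # otherwise g returns immediately, i.e. it halts
    return g

g = make_g(naive_halts)
# naive_halts predicts that g halts, yet calling g() would never return:
print(naive_halts(g))  # True, contradicted by g's actual behaviour
```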
Sketch of rigorous proof
The concept above shows the general method of the proof, but the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. Moreover, the definition of g is self-referential. A rigorous proof addresses these issues. The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h (for "halts") is not computable: h(i, x) = 1 if program i halts on input x, and h(i, x) = 0 otherwise.
Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of computation.
f(i,j)         i
             1  2  3  4  5  6
   j    1    1  0  0  1  0  1
        2    0  0  0  1  0  0
        3    0  1  0  1  0  1
        4    1  0  0  1  0  0
        5    0  0  0  1  1  1
        6    1  1  0  0  1  0
   f(i,i)    1  0  0  1  1  0
   g(i)      U  0  0  U  U  0
Possible values for a total computable function f arranged in a 2D array. The orange cells are the diagonal. The values of f(i,i) and g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.
The proof proceeds by directly establishing that no total computable function with two arguments can be the required function h. As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e: g(i) = 0 if f(i, i) = 0, and g(i) is undefined otherwise.
The verification that g is computable relies on the following constructs (or their equivalents):
computable subprograms (the program that computes f is a subprogram in program e),
duplication of values (program e computes the inputs i,i for f from the input i for g),
conditional branching (program e selects between two results depending on the value it computes for f(i,i)),
not producing a defined result (for example, by looping forever),
returning a value of 0.
The following pseudocode for e illustrates a straightforward way to compute g:

procedure e(i):
    if f(i, i) == 0 then
        return 0
    else
        loop forever
Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
It follows from the definition of g that exactly one of the following two cases must hold:
f(e,e) = 0 and so g(e) = 0. In this case program e halts on input e, so h(e,e) = 1.
f(e,e) ≠ 0 and so g(e) is undefined. In this case program e does not halt on input e, so h(e,e) = 0.
In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.
This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i,i), then g(i) is 0; otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume f were the halting function h. If g(e) is defined (g(e) = 0 in this case), then g(e) halts, so f(e,e) = 1; but g(e) = 0 only when f(e,e) = 0, a contradiction. Similarly, if g(e) is not defined, then f(e,e) = 0, which by g's construction makes g(e) = 0, contradicting the assumption that g(e) is undefined. In both cases a contradiction arises, so no total computable function f can be the halting function h.
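The table's values can be checked mechanically. The sketch below builds g from the diagonal and verifies that, were some program e to compute g, h(e,e) would disagree with f(e,e) for every e:

```python
# The diagonal construction on the table's values: g(i) is 0 where
# f(i,i) = 0 and undefined (None here) where f(i,i) != 0.

f = [                     # f[j-1][i-1] = f(i, j), the table's values
    [1, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 1, 0],
]

diagonal = [f[i][i] for i in range(6)]
g = [0 if v == 0 else None for v in diagonal]  # None models "undefined"

print(diagonal)  # [1, 0, 0, 1, 1, 0]
print(g)         # [None, 0, 0, None, None, 0]

# If program e computes g, then h(e,e) records whether g(e) is defined:
h_ee = [0 if v is None else 1 for v in g]
print(h_ee)      # [0, 1, 1, 0, 0, 1]

# h(e,e) differs from f(e,e) for every e, so f cannot be h:
for e in range(6):
    assert h_ee[e] != diagonal[e]
```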
A typical method of proving a problem to be undecidable is to reduce it to the halting problem.
For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If an algorithm could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts.
Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halts for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true". Also, this theorem holds only for properties of the partial function implemented by the program; Rice's theorem does not apply to properties of the program itself. For example, "halts on input 0 within 100 steps" is not a property of the partial function that is implemented by the program: it is a property of the program implementing the partial function, and is very much decidable.
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.
While Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.
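One common heuristic is a ranking function: a nonnegative integer measure that strictly decreases on each iteration, which forces the loop to halt. The sketch below checks this property along a single observed run (an illustrative heuristic, not a general prover):

```python
# Check that `measure` is a ranking function along one trace: it stays
# nonnegative and strictly decreases at every step.

def check_ranking(trace, measure):
    values = [measure(state) for state in trace]
    return all(v >= 0 for v in values) and all(
        a > b for a, b in zip(values, values[1:]))

# Euclid's gcd: the second argument strictly decreases, so the loop halts.
def gcd_trace(a, b):
    trace = [(a, b)]
    while b:
        a, b = b, a % b
        trace.append((a, b))
    return trace

print(check_ranking(gcd_trace(48, 18), lambda s: s[1]))  # True
```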
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.
Gödel's incompleteness theorems
The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.
The weaker form of the theorem can be proved from the
undecidability of the halting problem as follows. Assume that we have a
sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n until we either find H(a, i)
or its negation, we will always halt, and furthermore, the answer this
search gives us will be true (by soundness). This therefore gives us an
algorithm to decide the halting problem. Since we know that there cannot
be such an algorithm, it follows that the assumption that there is a
consistent and complete axiomatization of all true first-order logic
statements about natural numbers must be false.
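The search procedure in the argument above can be sketched in code. Here `N` (an enumerator of the axiomatization's theorems) and `encode_H` (an encoding of halting statements) are hypothetical stand-ins, since the argument shows that no real enumerator with these properties can exist; the toy enumerator at the bottom is purely for illustration of the shape of the reduction.

```python
from itertools import count

def decide_halting(N, encode_H, a, i):
    """Sketch of the reduction: decide whether program a halts on input i,
    given a hypothetical enumerator N of all theorems of a complete and
    sound axiomatization. Negation is modeled as the pair ("not", stmt)."""
    target = encode_H(a, i)        # the statement H(a, i)
    negation = ("not", target)     # the statement not-H(a, i)
    # Completeness guarantees one of the two statements eventually appears;
    # soundness guarantees that whichever appears first is true.
    for n in count():
        stmt = N(n)
        if stmt == target:
            return True            # a halts on i
        if stmt == negation:
            return False           # a does not halt on i

# Toy illustration over a fictional two-program universe, for shape only:
def encode_H(a, i):
    return ("halts", a, i)

truths = [("halts", "p1", 0), ("not", ("halts", "p2", 0))]
N = lambda n: truths[n % len(truths)]

assert decide_halting(N, encode_H, "p1", 0) is True
assert decide_halting(N, encode_H, "p2", 0) is False
```

Because a real `decide_halting` would solve the halting problem, no enumerator `N` covering all true first-order statements about the natural numbers can exist, which is exactly the contradiction the proof derives.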
Generalization
Many variants of the halting problem can be found in computability textbooks. Typically, these problems are RE-complete and describe sets of complexity Σ₁⁰ in the arithmetical hierarchy, the same as the standard halting problem. The variants are thus undecidable, and the standard halting problem reduces to each variant and vice-versa. However, some variants have a higher degree of unsolvability and cannot be reduced to the standard halting problem. The next two examples are common.
Halting on all inputs
The universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total).
This problem is not only undecidable, as the halting problem is, but highly undecidable. In terms of the arithmetical hierarchy, it is Π₂⁰-complete.
This means, in particular, that it cannot be decided even with an oracle for the halting problem.
Recognizing partial solutions
There
are many programs that, for some inputs, return a correct answer to the
halting problem, while for other inputs they do not return an answer at
all.
However the problem "given program p, is it a partial halting
solver" (in the sense described) is at least as hard as the halting
problem.
To see this, assume that there is an algorithm PHSR ("partial halting
solver recognizer") to do that. Then it can be used to solve the halting
problem,
as follows:
To test whether input program x halts on y, construct a program p that on input (x,y) reports true and diverges on all other inputs.
Then test p with PHSR.
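The two steps above can be sketched as follows. `PHSR` itself is hypothetical (the argument shows it cannot exist), so only the construction of `p` is concrete here.

```python
def make_p(x, y):
    """Step 1 of the reduction: build the program p that reports True on
    input (x, y) and diverges on every other input. p is a partial halting
    solver exactly when its single answer is correct, i.e. exactly when
    x halts on y -- so a recognizer PHSR applied to p would decide halting."""
    def p(a, b):
        if (a, b) == (x, y):
            return True
        while True:       # diverge on all other inputs
            pass
    return p

# Step 2 would be: answer "x halts on y" iff PHSR(make_p(x, y)) accepts.
p = make_p("some_program", "some_input")
assert p("some_program", "some_input") is True  # p's single defined answer
```

Note that `p` never answers False; a partial halting solver is only required to be correct where it answers at all, which is what makes this one-answer program a valid instance.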
The above argument is a reduction of the halting problem to PHS recognition, and in the same manner,
harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically Π₂⁰-complete.
Lossy computation
A lossy Turing machine
is a Turing machine in which part of the tape may non-deterministically
disappear. The halting problem is decidable for a lossy Turing machine
but non-primitive recursive.
A machine with an oracle
for the halting problem can determine whether particular Turing
machines will halt on particular inputs, but it cannot determine, in
general, whether machines equivalent to itself will halt.
Africa has the longest record of human habitation in the world. The
first hominins emerged 6-7 million years ago, and among the earliest
anatomically modern human skulls found so far were discovered at Omo Kibish, Jebel Irhoud, and Florisbad.
Africa's prehistory has been largely ignored, with the exception of research into early human evolution. However, it is overseen by the PanAfrican Archaeological Association, whose members consist of professional archaeologists from all over Africa.
Early Stone Age Africa
The Early Stone Age (ESA), which spanned from approximately 2.6 million years ago (mya) to 280,000 years ago (ya), describes a period in African
prehistory in which the first stone tools were developed, including both
Oldowan and Acheulean. Early sites along the East African Rift include Lomekwi in the Turkana Basin, Kenya, and Olduvai Gorge farther south in modern-day Tanzania. The earliest hominids were discovered in Ethiopia and named Ardipithecus ramidus. A diverging group of hominin species, known as australopithecines, was first discovered at Olduvai. Australopithecine fossils include Paranthropus boisei, the most famous specimen being nicknamed "Zinj" or "Nutcracker man" by Mary Leakey,
the archaeologist who found it. Another older, famous
australopithecine, related to those found at Olduvai Gorge but found
approximately 2000 kilometers to the north east in the Awash Valley of
Ethiopia, is Lucy, who was discovered by Donald Johanson and his team in 1974.
The earliest evidence of stone tool use was discovered in 2015, by Sonia Harmand, at Lomekwi 3 in West Turkana, Kenya, with stone tools dating to 3.3 million years ago.
Lomekwi tools differ from Oldowan tools in their primitive
technological features, which make them large and heavy. The Lomekwi tools are
thought to have been made by Australopithecus afarensis. Prior to this discovery, some of the oldest stone tools were found at Lokalalei 2C in West Turkana, where artifacts exhibiting knapping processes conducted by Australopithecus africanus
date to about 2.34 million years ago, marking the beginning of the ESA.
The incorporation of tools gave early hominins the ability to respond more readily to changes beyond the immediate needs of daily life, and extended adaptive behavioral patterns into long-term trends sustained over generations.
Around a million years later, Homo erectus emerged as a more advanced species and made tools known as Acheulean handaxes.
These handaxes were a multipurpose bifacial technology that remained
unchanged for thousands of years. The technology demonstrates an
increase in brain development and complexity in Homo erectus, as shown by the increased level of forethought and knowledge of material required for production of the tools. Homo erectus are also associated with the first instances of "modern human living," such as fire, "modern emotions," and art. The earliest evidence for hominins controlling fire is found in Wonderwerk Cave, South Africa. Along with their new technologies, they were also a part of the first "Out of Africa" movement and spread to all parts of the world. This movement took place somewhere around 1.8–0.8 million years ago, when Homo erectus spread out from Africa and into Eurasia. One of the most notable Homo erectus skeletons ever found was that of Nariokotome Boy, who was found near Lake Turkana in Kenya, discovered by Richard Leakey and Kamoya Kimeu.
Nariokotome Boy was a teenager when he died, and his skeleton exhibits
the first evidence for caring in the archaeological record, because he
was cared for through his debilitating scoliosis.
A recently discovered addition to the line of human ancestors is Homo naledi. Found in Rising Star Cave in South Africa, Homo naledi is undated but has features of both primitive and modern humans.
Middle Stone Age Africa
The Middle Stone Age (MSA), dating to roughly 280,000 to 40,000 years ago, is characterized by the continuation of hunter-gatherer lifestyles and, as more recently recognized, perhaps the origins of modern human behavior and cognition.
Although hominin brains were being reorganized and modernized at a fast rate, hominin behavior adapted more slowly and remained comparatively primitive. African hunter-gatherers hunted larger mammals and relied on an assortment of edible plants, both in the grasslands that are now the Sahara desert and in the rain forests of Central Africa. Coastal peoples also subsisted on seafood, and numerous middens indicate their diet.
Homo sapiens appear for the first time in the archaeological record around 300–270,000 years ago in Africa. They soon developed a more advanced method of flint tool manufacture involving striking flakes from a prepared core. This permitted more control over the size and shape of the finished tool and led to the development of composite tools, projectile points and scrapers,
which could be hafted onto spears, arrows or handles. In turn, this
technology permitted more efficient hunting such as that demonstrated by
the Aterian
industry. In eastern Africa, stone tools were made from raw materials
such as quartz and obsidian using the prepared core method, which varied
by region.
It was during the late Middle Pleistocene that many groups began to
migrate away from eastern Africa, especially southward. Technological
improvements such as Aterian methods and the development of new skills
helped these people adapt to new landscapes.
Although still hunter-gatherers, there is evidence that these
early humans also actively managed food resources as well as simply
harvesting them. The Congo Basin
was first occupied around this time; different conditions and diet
there produced recognizably different behaviors and tool types. There
are also the earliest signs of art appearing through the use of ochre as a body decoration and paint, and burial rituals may have been practiced as well.
Evidence of a variety of behaviors indicative of behavioral modernity dates to the African Middle Stone Age, associated with early Homo sapiens.
Abstract imagery, widened subsistence strategies, and other "modern"
behaviors have been discovered from that period in Africa, especially
South, North, and East Africa. The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was confirmed to be around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa.
Beads and other personal ornamentation have been found from Morocco which might be as much as 130,000 years old; the Cave of Hearths in South Africa has likewise yielded a number of beads dating to well before 50,000 years ago.
These types of ornamentations represent some of the earliest signs of
symbolic behavior amongst human ancestors, including developments in
cognition and social relations. The beads from Bizmoune Cave, in southwest Morocco, are thought to be over 142,000 years old. Shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa.
Specialized projectile weapons as well have been found at various
sites in Middle Stone Age Africa, including bone and stone arrowheads
at South African sites such as Sibudu Cave (along with an early bone needle also found at Sibudu) dating approximately 60,000-70,000 years ago, and bone harpoons at the Central African site of Katanda dating to about 90,000 years ago.
Evidence also exists for the systematic heat treating of silcrete stone
to increase its flake-ability for the purpose of toolmaking, beginning
approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago. Early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, were discovered in 2013 at the Ethiopian site of Gademotta, and date to around 279,000 years ago.
In 2008, an ochre processing workshop likely for the production
of paints was uncovered dating to ca. 100,000 years ago at Blombos Cave,
South Africa. Analysis shows that a liquefied pigment-rich mixture was
produced and stored in the two abalone shells, and that ochre, bone,
charcoal, grindstones and hammer-stones also formed a composite part of
the toolkits. Evidence for the complexity of the task includes procuring
and combining raw materials from various sources (implying they had a
mental template of the process they would follow), possibly using
pyrotechnology to facilitate fat extraction from bone, using a probable
recipe to produce the compound, and the use of shell containers for
mixing and storage for later use.
Modern behaviors, such as the making of shell beads, bone tools and
arrows, and the use of ochre pigment, are evident at a Kenyan site by
78,000-67,000 years ago.
Expanding subsistence strategies beyond big-game hunting and the
consequential diversity in tool types has been noted as signs of
behavioral modernity. A number of South African sites have shown an
early reliance on aquatic resources from fish to shellfish. Pinnacle Point,
in particular, shows exploitation of marine resources as early as
120,000 years ago, perhaps in response to more arid conditions inland.
Establishing a reliance on predictable shellfish deposits, for example,
could reduce mobility and facilitate complex social systems and
symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence
of fishing as well. Taphonomic change in fish skeletons from Blombos
Cave has been interpreted as capture of live fish, clearly an
intentional human behavior.
Evidence was found in 2018, dating to about 320,000 years ago, at the Kenyan site of Olorgesailie,
of the early emergence of modern behaviors including: long-distance
trade networks (involving goods such as obsidian), the use of pigments,
and the possible making of projectile points. It is observed by the
authors of three 2018 studies on the site that the evidence of these
behaviors is approximately contemporary to the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors began in Africa around the time of the emergence of Homo sapiens.
In 2019, further evidence of early complex projectile weapons in
Africa was found at Aduma, Ethiopia, dated 80,000–100,000 years ago, in
the form of points considered likely to belong to darts delivered by
spear throwers.
The Hofmeyr Skull is a specimen of a 36,000-year-old human skull that was found in 1952 near Hofmeyr, South Africa. Osteological analysis of the cranium by the Max Planck Institute for Evolutionary Anthropology indicates that the specimen is morphologically distinct from recent groups in subequatorial Africa, including the local Khoisan populations.
The Hofmeyr fossil instead has a very close affinity with other Upper
Paleolithic skulls from Europe. Some scientists have interpreted this
relationship as being consistent with the Out-of-Africa
theory, which hypothesizes that at least some Upper Paleolithic human
groups in Africa, Europe and Asia should morphologically resemble each
other.
Around 10,000 BCE, African hunter-gatherer societies developed microlith
technologies. Composite microlithic tools were useful for harvesting
wild grasses and also permitted the production of fine shell and bone
fish hooks, which may have allowed for the exploitation of a broader
range of food resources. Some of the earliest pottery in Africa has also
been found in the Sahara and is associated with hunter/gatherer
populations.
By 9,400 BCE, in Ounjougou, central Mali, pottery is thought to have been
independently invented by local hunter-gatherers as they became more
sedentary and began to intensively gather local wild grains (such as
millet).
Archaeological evidence has attested that population settlements occurred in Nubia as early as the Late Pleistocene
era and from the 5th millennium BC onwards, whereas there is "no or
scanty evidence" of human presence in the Egyptian Nile Valley during
these periods, which may be due to problems in site preservation.
In 2013, Iberomaurusian skeletons from the prehistoric sites of Taforalt and Afalou
in the Maghreb were analyzed for ancient DNA. All of the specimens
belonged to maternal clades associated with either North Africa or the
northern and southern Mediterranean littoral, indicating gene flow
between these areas since the Epipaleolithic. The ancient Taforalt individuals carried the mtDNA haplogroups U6, H, JT and V, which points to population continuity in the region dating from the Iberomaurusian period.
There is an ongoing debate regarding the use of modern-day hunter-gatherer societies, like the San, as an analogy to societies of the Later Stone Age.
Cultural developments during the early Neolithic led nomadic hunter-gatherer lifestyles to be slowly supplanted by pastoralism in northern Africa.
Africa's earliest evidence for domesticated animals comes from the
Sahara c. 7000-6000 BCE, and evidence for new cattle herding lifestyles
is preserved both at archaeological sites such as Gobero and in Saharan rock art. As the Sahara increased in size due to aridification, early pastoralists migrated south and eastwards into the Niger and Nile valleys, bringing with them herding practices that would also spread throughout eastern and southern Africa. The Savanna Pastoral Neolithic and the Elmenteitan material culture traditions are found in eastern Africa. Recent aDNA research has provided evidence for the spread of Pastoral Neolithic herders from eastern Africa into southern Africa.
In the western Sahel the rise of settled communities occurred largely as a result of the domestication of millet and of sorghum.
Archaeology points to sizable urban populations in West Africa later,
beginning by the 2nd millennium BCE. Symbiotic trade relations developed
before the trans-Saharan trade,
in response to the opportunities afforded by north–south diversity in
ecosystems across deserts, grasslands, and forests. The agriculturists
received salt from the desert nomads. The desert nomads acquired meat
and other foods from pastoralists and farmers of the grasslands and from
fishermen on the Niger River. The forest-dwellers provided furs and meat.
In West Africa, Dhar Tichitt and Oualata in present-day Mauritania
figure prominently among the early urban centers, dated to ~2,000 BCE.
About 500 stone settlements litter the region in the former savannah of
the Sahara. The inhabitants fished and grew millet. The ancestors of the
Soninke, of the Mandé peoples,
may have been responsible for constructing such settlements. Around 300
BCE the region became more desiccated and the settlements began to
decline, most likely relocating to Koumbi Saleh. Architectural evidence and the comparison of pottery styles suggest that Dhar Tichitt was related to the subsequent Ghana Empire and Djenné-Djenno cultures (in present-day Mali).
Farming societies in Africa developed after the origins and spread of
livestock pastoralism throughout the continent. The early use of
metallurgy by farming communities may have been developed independently
in Africa around 3000-2000 BCE.
Pockets of iron usage appeared in subsequent millennia but metal did
not supplant stone in the south of the continent until around 500 BCE,
when both iron and copper spread southwards through the continent,
reaching the Cape around 200 CE. Although some details regarding the Bantu expansion
are still controversial amongst archaeologists, linguists, and
historians, the widespread use of iron does seem to have played a major
role in the spread of Bantu
farming communities throughout sub-Saharan Africa. Contact and
interaction between hunter/gatherer, pastoralist, and incoming farming
communities remains an important topic of interest in African
archaeology today.
In 2014, ancient DNA analysis of a 2,330-year-old male forager's skeleton in southern Africa found that the specimen belonged to the L0d2c1c mtDNA haplogroup. This maternal clade is today most closely associated with the Ju, a subgroup of the indigenous San people, which points to population continuity in the region. In 2016, a Late Iron Age desiccated mummy from the Tuli region in northern Botswana was also found to belong to haplogroup L0.
In central Nigeria, West Africa, around 1,500 BCE, the Nok culture developed on the Jos Plateau. The Nok people produced lifelike representations in terracotta, including human heads and human figures, elephants, and other animals. By 500 BCE, and possibly a few centuries earlier,
they were smelting iron. By 200 AD the Nok culture had vanished. Based
on stylistic similarities with the Nok terracottas, the bronze figurines
of the Yoruba kingdom of Ife and those of the Bini kingdom of Benin are now believed to be continuations of the traditions of the earlier Nok culture.
Another site in southern Africa that used different types of metal was Bosutswe. The people who lived there worked materials such as copper, bronze, and iron. This metalworking formed the basis of the trade responsible for the site's success and helped keep power in the hands of the ruling Lose class.
Trade with the Near East and Europe led to strong mercantile empires growing such as the Ethiopian kingdom of Axum and Harla Kingdom. Various states and polities also developed in West Africa including Ife, the Kingdom of Benin, Igbo Ukwu, Djenné-Djenno, Ghana Empire, Bono State and the Ashanti Empire. Bantu peoples in southern Africa built the impressive site of Great Zimbabwe
between the 10th and 15th centuries CE. The north of the continent had
close cultural and economic ties with the Classical and medieval Mediterranean. Cattle herding became important in the Horn of Africa and huge earthwork enclosures were built to corral the animals. The people of Christian Ethiopia produced impressive rock-cut monolithic churches such as that of St George at Lalibela during the 13th century and the first Portuguese forts appeared soon after this, penetrating as far south as Zambia.