Random-access machine

From Wikipedia, the free encyclopedia

In computer science, the random-access machine (RAM or RA-machine) is a model of computation that describes an abstract machine in the general class of register machines. The RA-machine is very similar to the counter machine but with the added capability of 'indirect addressing' of its registers. The 'registers' are intuitively equivalent to the main memory of a common computer, except for the additional ability of registers to store natural numbers of any size. Like the counter machine, the RA-machine contains its execution instructions in the finite-state portion of the machine (the so-called Harvard architecture).

The RA-machine's equivalent of the universal Turing machine – with its program in the registers as well as its data – is called the random-access stored-program machine or RASP-machine. It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.

Together with the Turing machine and counter-machine models, the RA-machine and RASP-machine models are used for computational complexity analysis. Van Emde Boas (1990) calls these three together with the pointer machine, "sequential machine" models, to distinguish them from "parallel random-access machine" models.

Informal description

An RA-machine consists of the following:

  • an infinite number of memory locations called "registers"; each register has an address, which is a natural number (zero included); each register can store exactly one natural number of any size, including zero
  • the instruction table, or just "table", containing execution instructions; the exact instruction set varies depending on the author; common instructions include: increment, decrement, clear to zero, copy, conditional jump, halt; other instructions are unnecessary because they can be created by combinations of instructions from the instruction set
  • one special register called the "instruction register" (IR); this register points to the instruction being executed in the instruction table

For a humorous treatment of a similar concept, see the "programming language" Branflakes.

Introduction to the model

The concept of a random-access machine (RAM) starts with the simplest model of all, the so-called counter machine model. Two additions move it away from the counter machine, however. The first enhances the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer with the addition of one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".

Formal definition

A random-access machine (RAM) is an abstract computational-machine model identical to a multiple-register counter machine with the addition of indirect addressing. At the discretion of an instruction from its finite state machine's TABLE, the machine derives a "target" register's address either (i) directly from the instruction itself, or (ii) indirectly from the contents (e.g. number, label) of the "pointer" register specified in the instruction.

By definition: A register is a location with both an address (a unique, distinguishable designation/locator equivalent to a natural number) and a content – a single natural number. For precision we will use the quasi-formal symbolism from Boolos-Burgess-Jeffrey (2002) to specify a register, its contents, and an operation on a register:

  • [r] means "the contents of register with address r". The label "r" here is a "variable" that can be filled with a natural number or a letter (e.g. "A") or a name.
  • → means "copy/deposit into", or "replaces", but without destruction of the source
Example: [3] +1 → 3; means "the contents of source register with address 3, plus 1, is put into destination register with address 3" (here source and destination are the same place). If [3]=37, that is, the contents of register 3 is the number 37, then 37+1 = 38 will be put into register 3.
Example: [3] → 5; means "the contents of source register with address 3 is put into destination register with address 5". If [3]=38, that is, the contents of register 3 is the number 38, then this number will be put into register 5. The contents of register 3 are not disturbed by this operation, so [3] continues to be 38, now the same as [5].
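The [r] and → notation above maps directly onto a few lines of Python. This is a sketch only; representing registers as a dict from address to contents is our own illustrative choice, not part of the formal model:

```python
# Registers modeled as a mapping: address (a natural number) -> contents.
regs = {3: 37, 5: 0}

# [3] + 1 -> 3 : the contents of register 3, plus 1, goes back into register 3
regs[3] = regs[3] + 1          # regs[3] is now 38

# [3] -> 5 : copy the contents of register 3 into register 5;
# the source is not disturbed by the operation
regs[5] = regs[3]

assert regs[3] == 38           # source unchanged by the copy
assert regs[5] == 38           # destination now holds the same number
```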

Definition: A direct instruction is one that specifies in the instruction itself the address of the source or destination register whose contents will be the subject of the instruction. Definition: An indirect instruction is one that specifies a "pointer register", the contents of which is the address of a "target" register. The target register can be either a source or a destination (the various COPY instructions provide examples of this). A register can address itself indirectly.

For want of a standard/convention this article will specify "direct/indirect", abbreviated as "d/i", as a parameter (or parameters) in the instruction:
Example: COPY ( d, A, i, N ) means: directly (d) take the source register's address (register "A") from the instruction itself, but indirectly (i) take the destination address from pointer-register N. Suppose [N]=3; then register 3 is the destination and the instruction will do the following: [A] → 3.

Definition: The contents of the source register is used by the instruction. The source register's address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction.

Definition: The contents of the pointer register is the address of the "target" register.

Definition: The contents of the pointer register points to the target register – the "target" may be either a source or a destination register.

Definition: The destination register is where the instruction deposits its result. Its address can be specified either (i) directly by the instruction, or (ii) indirectly by the pointer register specified by the instruction. The source and destination registers can be one and the same.

Refresher: The counter-machine model

Melzak (1961) provides an easy visualization of a counter machine: its "registers" are holes in the ground, and these holes hold pebbles. Per an instruction, into and out of these holes "the computer" (person or machine) adds (INCrements) or removes (DECrements) a single pebble. As needed, additional pebbles come from, and excess pebbles go back into, an infinite supply; if the hole is too small to accommodate the pebbles the "computer" digs the hole bigger.
Minsky (1961) and Hopcroft-Ullman 1979 (p. 171) offer the visualization of a multi-tape Turing machine with as many left-ended tapes as "registers". Each tape's length is unbounded to the right, and every square is blank except for the left end, which is marked. The distance of a tape's "head" from its left end, measured in numbers of tape-squares, represents the natural number in "the register". To DECrement the count of squares, the tape head moves left; to INCrement, it moves right. There is no need to print or erase marks on the tape; the only conditional instructions are to check whether the head is at the left end, by testing the left-end mark with a "Jump-if-marked" instruction.
The following instruction "mnemonics" e.g. "CLR (r)" are arbitrary; no standard exists.

The register machine has, for a memory external to its finite-state machine, an unbounded (countable and unbounded) collection of discrete and uniquely labelled locations with unbounded capacity, called "registers". These registers hold only natural numbers (zero and positive integers). Per a list of sequential instructions in the finite state machine's TABLE, a few (e.g. 2) types of primitive operations operate on the contents of these "registers". Finally, a conditional-expression in the form of an IF-THEN-ELSE is available to test the contents of one or two registers and "branch/jump" the finite state machine out of the default instruction-sequence.

Base model 1: The model closest to Minsky's (1961) visualization and to Lambek (1961):

  • { INCrement contents of register r, DECrement contents of register r, IF contents of register r is Zero THEN Jump to instruction Iz ELSE continue to next instruction }:
Instruction Mnemonic Action on register(s) "r" Action on finite state machine's Instruction Register, IR
INCrement INC ( r ) [r] + 1 → r [IR] + 1 → IR
DECrement DEC ( r ) [r] - 1 → r [IR] + 1 → IR
Jump if Zero JZ ( r, z ) none IF [r] = 0 THEN z → IR ELSE [IR] + 1 → IR
Halt H none [IR] → IR
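Base model 1 is small enough to interpret in a few lines. The sketch below is our own illustration (the tuple encoding, 0-based instruction register, and flooring DEC at zero for naturals are assumptions, not from the source):

```python
def run_base1(program, regs, max_steps=10_000):
    """Interpret base model 1: INC, DEC, JZ, H.

    `program` is a list of tuples; `regs` maps addresses to naturals.
    The instruction register IR is the (0-based) index into `program`.
    """
    ir = 0
    for _ in range(max_steps):
        op = program[ir]
        if op[0] == "INC":          # [r] + 1 -> r ; [IR] + 1 -> IR
            regs[op[1]] = regs.get(op[1], 0) + 1
            ir += 1
        elif op[0] == "DEC":        # [r] - 1 -> r (naturals: floor at 0)
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
            ir += 1
        elif op[0] == "JZ":         # IF [r] = 0 THEN z -> IR ELSE [IR] + 1 -> IR
            ir = op[2] if regs.get(op[1], 0) == 0 else ir + 1
        elif op[0] == "H":          # halt
            return regs
    raise RuntimeError("step budget exhausted")

# Move [1] into register 2 by repeated DEC/INC (destroying register 1);
# register 0 is kept at zero so JZ (0, 0) acts as an unconditional jump:
prog = [("JZ", 1, 4), ("DEC", 1), ("INC", 2), ("JZ", 0, 0), ("H",)]
print(run_base1(prog, {0: 0, 1: 3, 2: 0}))   # → {0: 0, 1: 0, 2: 3}
```

Note how the always-zero register turns JZ into an unconditional jump, the same trick Minsky's hint (below) exploits.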

Base model 2: The "successor" model (named after the successor function of the Peano axioms):

  • { INCrement the contents of register r, CLeaR the contents of register r, IF contents of register rj Equals the contents of register rk THEN Jump to instruction Iz ELSE go to the next instruction }
Instruction Mnemonic Action on register(s) "r" Action on finite state machine's Instruction Register, IR
CLeaR CLR ( r ) 0 → r [IR] + 1 → IR
INCrement INC ( r ) [r] + 1 → r [IR] + 1 → IR
Jump if Equal JE (r1, r2, z) none IF [r1] = [r2] THEN z → IR ELSE [IR] + 1 → IR
Halt H none [IR] → IR

Base model 3: Used by Elgot-Robinson (1964) in their investigation of bounded and unbounded RASPs – the "successor" model with COPY in the place of CLEAR:

  • { INCrement the contents of register r, COPY the contents of register rj to register rk, IF contents of register rj Equals the contents of register rk THEN Jump to instruction Iz ELSE go to the next instruction }
Instruction Mnemonic Action on register(s) "r" Action on finite state machine's Instruction Register, IR
COPY COPY (r1, r2) [r1] → r2 [IR] + 1 → IR
INCrement INC ( r ) [r] + 1 → r [IR] + 1 → IR
Jump if Equal JE (r1, r2, z) none IF [r1] = [r2] THEN z → IR ELSE [IR] + 1 → IR
Halt H none [IR] → IR

Creating "convenience instructions" from the base sets

The three base sets 1, 2, or 3 above are equivalent in the sense that one can create the instructions of one set using the instructions of another set (an interesting exercise: a hint from Minsky (1967) – declare a reserved register e.g. call it "0" (or Z for "zero" or E for "erase") to contain the number 0). The choice of model will depend on which an author finds easiest to use in a demonstration, or a proof, etc.

Moreover, from base sets 1, 2, or 3 we can create any of the primitive recursive functions ( cf Minsky (1967), Boolos-Burgess-Jeffrey (2002) ). (How to cast the net wider to capture the total and partial mu recursive functions will be discussed in context of indirect addressing). However, building the primitive recursive functions is difficult because the instruction sets are so ... primitive (tiny). One solution is to expand a particular set with "convenience instructions" from another set:

These will not be subroutines in the conventional sense but rather blocks of instructions created from the base set and given a mnemonic. In a formal sense, to use these blocks we need to either (i) "expand" them into their base-instruction equivalents – they will require the use of temporary or "auxiliary" registers so the model must take this into account, or (ii) design our machines/models with the instructions 'built in'.
Example: Base set 1. To create CLR (r) use the block of instructions to count down register r to zero. Observe the use of the hint mentioned above:
  • CLR (r) ≡
  • loop: JZ (r, exit)
  • DEC (r)
  • JZ (0, loop)
  • exit: etc.
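The CLR (r) block translates line for line into a loop. A minimal sketch, assuming the convention above that register 0 is the reserved always-zero register:

```python
def clr(regs, r, max_steps=10_000):
    """Expand CLR (r) from base set 1: count register r down to zero.

    Mirrors the block:   loop: JZ (r, exit); DEC (r); JZ (0, loop); exit:
    Since the reserved register 0 always holds zero, JZ (0, loop) is an
    unconditional jump back to the loop head.
    """
    for _ in range(max_steps):
        if regs[r] == 0:        # loop: JZ (r, exit)
            return              # exit:
        regs[r] -= 1            # DEC (r)
        # JZ (0, loop): [0] = 0 always, so fall back to the top of the loop
    raise RuntimeError("step budget exhausted")

regs = {0: 0, 7: 5}
clr(regs, 7)
assert regs[7] == 0             # register 7 counted down to zero
```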

Again, all of this is for convenience only; none of this increases the model's intrinsic power.

For example: the most expanded set would include each unique instruction from the three sets, plus unconditional jump J (z) i.e.:

  • { CLR (r), DEC (r), INC (r), CPY ( rs, rd ), JZ (r, z), JE ( rj, rk, z ), J(z) }

Most authors pick one or the other of the conditional jumps, e.g. Shepherdson-Sturgis (1963) use the above set minus JE (to be perfectly accurate they use JNZ – Jump if Not Zero in place of JZ; yet another possible convenience instruction).

The "indirect" operation

Example of indirect addressing

In our daily lives the notion of an "indirect operation" is not unusual.

Example: A treasure hunt.
At location "Tom_&_Becky's_cave_in_pirate_chest" will be where we can find a map directing us to "the treasure":
(1) We go to location "Tom_&_Becky's_cave..." and dig around until we find a wooden box
(2) Inside the box is a map to the location of the treasure: "under_Thatcher's_front_porch"
(3) We go to location "under_Thatcher's_front_porch", jackhammer away the concrete, and discover "the treasure": a sack of rusty door-knobs.

Indirection specifies a location identified as the pirate chest in "Tom_&_Becky's_cave..." that acts as a pointer to any other location (including itself): its contents (the treasure map) provides the "address" of the target location "under_Thatcher's_front_porch" where the real action is occurring.

Why the need for an indirect operation: Two major problems with the counter-machine model

In the following one must remember that these models are abstract models with two fundamental differences from anything physically real: unbounded numbers of registers each with unbounded capacities. The problem appears most dramatically when one tries to use a counter-machine model to build a RASP that is Turing equivalent and thus compute any partial mu recursive function:

  • Melzak (1961) added indirection to his "hole-and-pebble" model so that his model could modify itself with a "computed goto" and provides two examples of its use ("Decimal representation in the scale of d" and "Sorting by magnitude", whether these are used in his proof that the model is Turing equivalent is unclear since "the program itself is left to the reader as an exercise" (p. 292)). Minsky (1961, 1967) was able to demonstrate that, with suitable (but difficult-to-use) Gödel number encoding, the register model did not need indirection to be Turing equivalent; but it did need at least one unbounded register. As noted below, Minsky (1967) hints at the problem for a RASP but doesn't offer a solution. Elgot and Robinson (1964) proved that their RASP model P0 – it has no indirection capability – cannot compute all "recursive sequential functions" (ones that have parameters of arbitrary length) if it does not have the capability of modifying its own instructions, but it can via Gödel numbers if it does (p. 395-397; in particular figure 2 and footnote p. 395). On the other hand their RASP model P'0 equipped with an "index register" (indirect addressing) can compute all the "partial recursive sequential functions" (the mu recursive functions) (p. 397-398).
Cook and Reckhow (1973) say it most succinctly:
"The indirect instructions are necessary in order for a fixed program to access an unbounded number of registers as the inputs vary." (p. 73)
  • Unbounded capacities of registers versus bounded capacities of state-machine instructions: The so-called finite state part of the machine is supposed to be – by the normal definition of algorithm – very finite both in the number of "states" (instructions) and the instructions' sizes (their capacity to hold symbols/signs). So how does a state machine move an arbitrarily large constant directly into a register, e.g. MOVE (k, r) (Move constant k to register r)? If huge constants are necessary they must either start out in the registers themselves or be created by the state machine using a finite number of instructions e.g. multiply and add subroutines using INC and DEC (but not a quasi-infinite number of these!).
Sometimes the constant k will be created by use of CLR ( r ) followed by INC ( r ) repeated k times – e.g. to put the constant k=3 into register r, i.e. 3 → r, so at the end of the instruction [r]=3: CLR (r), INC (r), INC (r), INC (r). This trick is mentioned by Kleene (1952) p. 223. The problem arises when the number to be created exhausts the number of instructions available to the finite state machine; there is always a bigger constant than the number of instructions available to the finite state machine.
  • Unbounded numbers of registers versus bounded state-machine instructions: This is more severe than the first problem. In particular, this problem arises when we attempt to build a so-called RASP, a "universal machine" (see more at Universal Turing machine) that uses its finite-state machine to interpret a "program of instructions" located in its registers – i.e. we are building what is nowadays called a computer with the von Neumann architecture.
Observe that the counter machine's finite state machine must call out a register explicitly (directly) by its name/number: INC (65,356) calls out register number "65,356" explicitly. If the number of registers exceeds the capability of the finite state machine to address them, then registers outside the bounds will be unreachable. For example, if the finite state machine can only reach 65,536 = 2^16 registers, then how can it reach the 65,537th?

So how do we address a register beyond the bounds of the finite state machine? One approach would be to modify the program-instructions (the ones stored in the registers) so that they contain more than one command. But this too can be exhausted unless an instruction is of (potentially) unbounded size. So why not use just one "über-instruction" – one really really big number – that contains all the program instructions encoded into it! This is how Minsky solves the problem, but the Gödel numbering he uses represents a great inconvenience to the model, and the result is nothing at all like our intuitive notion of a "stored program computer".

Elgot and Robinson (1964) come to a similar conclusion with respect to a RASP that is "finitely determined". Indeed it can access an unbounded number of registers (e.g. to fetch instructions from them) but only if the RASP allows "self modification" of its program instructions, and has encoded its "data" in a Gödel number (Fig. 2 p. 396).

In the context of a more computer-like model using his RPT (repeat) instruction Minsky (1967) tantalizes us with a solution to the problem (cf p. 214, p. 259) but offers no firm resolution. He asserts:

"In general a RPT operation could not be an instruction in the finite-state part of the machine ... this might exhaust any particular amount of storage allowed in the finite part of the computer [sic, his name for his RAM models]. RPT operations require infinite registers of their own." (p. 214).

He offers us a bounded RPT that, together with CLR (r) and INC (r), can compute any primitive recursive function, and he offers the unbounded RPT quoted above as playing the role of the μ operator; together with CLR (r) and INC (r) it can compute the mu recursive functions. But he does not discuss "indirection" or the RAM model per se.

From the references in Hartmanis (1971) it appears that Cook (in his lecture notes while at UC Berkeley, 1970) has firmed up the notion of indirect addressing. This becomes clearer in the paper of Cook and Reckhow (1973) – Cook is Reckhow's Master's thesis advisor. Hartmanis' model – quite similar to Melzak's (1961) model – uses two and three-register adds and subtracts and two parameter copies; Cook and Reckhow's model reduces the number of parameters (registers called out in the program instructions) to one call-out by use of an accumulator "AC".

The solution in a nutshell: Design our machine/model with unbounded indirection – provide an unbounded "address" register that can potentially name (call out) any register no matter how many there are. For this to work, in general, the unbounded register requires an ability to be cleared and then incremented (and, possibly, decremented) by a potentially infinite loop. In this sense the solution represents the unbounded μ operator that can, if necessary, hunt ad infinitum along the unbounded string of registers until it finds what it is looking for. The pointer register is exactly like any other register with one exception: under the circumstances called "indirect addressing" it provides its contents, rather than the address-operand in the state machine's TABLE, to be the address of the target register (including possibly itself!).

Bounded indirection and the primitive recursive functions

If we eschew the Minsky approach of one monster number in one register, and specify that our machine model will be "like a computer" we have to confront this problem of indirection if we are to compute the recursive functions (also called the μ-recursive functions ) – both total and partial varieties.

Our simpler counter-machine model can do a "bounded" form of indirection – and thereby compute the sub-class of primitive recursive functions – by using a primitive recursive "operator" called "definition by cases" (defined in Kleene (1952) p. 229 and Boolos-Burgess-Jeffrey p. 74). Such a "bounded indirection" is a laborious, tedious affair. "Definition by cases" requires the machine to determine/distinguish the contents of the pointer register by attempting, time after time until success, to match this contents against a number/name that the case operator explicitly declares. Thus the definition by cases starts from e.g. the lower bound address and continues ad nauseam toward the upper bound address attempting to make a match:

Is the number in register N equal to 0? If not then is it equal to 1? 2? 3? ... 65364? If not then we're at the last number 65365 and this had better be the one, else we have a problem!
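That case-by-case matching can be sketched as a loop. This is an illustration only; the helper name and the explicit upper bound of 65365 (taken from the text above) are our own assumptions:

```python
def fetch_indirect_bounded(regs, n, upper_bound=65365):
    """Bounded 'definition by cases': determine which register the pointer
    register n names by matching its contents against each declared case
    0, 1, 2, ... in turn, up to the fixed upper bound."""
    pointer = regs[n]
    for k in range(0, upper_bound + 1):   # Is [n] equal to 0? to 1? to 2? ...
        if pointer == k:
            return regs.get(k, 0)         # matched case k: return [k]
    # Past the last case: the bounded machine simply has no answer.
    raise ValueError("pointer exceeds the bound -- we have a problem!")

regs = {2: 40, 40: 99}                    # register 2 points at register 40
assert fetch_indirect_bounded(regs, 2) == 99
```

The loop makes the tedium concrete: every address below the bound is a separate, explicitly declared case, and anything beyond the bound is unreachable.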

"Bounded" indirection will not allow us to compute the partial recursive functions – for those we need unbounded indirection aka the μ operator.

Suppose we had been able to continue on to number 65367, and in fact that register had what we were looking for. Then we could have completed our calculation successfully! But suppose 65367 didn't have what we needed. How far should we continue to go?

To be Turing equivalent the counter machine needs to either use the unfortunate single-register Minsky Gödel number method, or be augmented with an ability to explore the ends of its register string, ad infinitum if necessary. (A failure to find something "out there" defines what it means for an algorithm to fail to terminate; cf Kleene (1952) pp. 316ff Chapter XII Partial Recursive Functions, in particular p. 323-325.) See more on this in the example below.

Unbounded indirection and the partial recursive functions

For unbounded indirection we require a "hardware" change in our machine model. Once we make this change the model is no longer a counter machine, but rather a random-access machine.

Now when e.g. INC is specified, the finite state machine's instruction will have to specify where the address of the register of interest will come from. This where can be either (i) the state machine's instruction that provides an explicit label, or (ii) the pointer-register whose contents is the address of interest. Whenever an instruction specifies a register address it now will also need to specify an additional parameter "i/d" – "indirect/direct". In a sense this new "i/d" parameter is a "switch" that flips one way to get the direct address as specified in the instruction or the other way to get the indirect address from the pointer register (which pointer register – in some models every register can be a pointer register – is specified by the instruction). This "mutually exclusive but exhaustive choice" is yet another example of "definition by cases", and the arithmetic equivalent shown in the example below is derived from the definition in Kleene (1952) p. 229.

Example: CPY ( indirect_source, r_source, direct_destination, r_destination )
Assign a code to specify direct addressing as d="0" and indirect addressing as i="1". Then our machine can determine the source address as follows:
i*[rs] + (1-i)*rs
For example, suppose the contents of register 3 are "5" (i.e. [3]=5 ) and the contents of register 4 are "2" (i.e. [4]=2 ):
Example: CPY ( 1, 3, 0, 4 ) = CPY ( indirect, reg 3, direct, reg 4 )
1*[3] + 0*3 = [3] = source-register address 5
0*[4] + 1*4 = 4 = destination-register address 4
Example: CPY ( 0, 3, 0, 4 )
0*[3] + 1*3 = 3 = source-register address 3
0*[4] + 1*4 = 4 = destination-register address 4
Example: CPY ( 0, 3, 1, 4 )
0*[3] + 1*3 = 3 = source-register address 3
1*[4] + 0*4 = [4] = destination-register address 2
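The arithmetic "definition by cases" above is a one-line function; the sketch below simply re-runs the three worked examples (the function name is ours):

```python
def effective_address(i, r, regs):
    """Kleene-style arithmetic definition by cases:
    i = 1 (indirect): the address is [r], the contents of pointer register r;
    i = 0 (direct):   the address is r itself."""
    return i * regs.get(r, 0) + (1 - i) * r

regs = {3: 5, 4: 2}                          # [3]=5, [4]=2 as in the text
assert effective_address(1, 3, regs) == 5    # 1*[3] + 0*3 = [3] = 5
assert effective_address(0, 3, regs) == 3    # 0*[3] + 1*3 = 3
assert effective_address(0, 4, regs) == 4    # 0*[4] + 1*4 = 4
assert effective_address(1, 4, regs) == 2    # 1*[4] + 0*4 = [4] = 2
```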

The indirect COPY instruction

Probably the most useful of the added instructions is COPY. Indeed, Elgot-Robinson (1964) provide their models P0 and P'0 with the COPY instructions, and Cook-Reckhow (1973) provide their accumulator-based model with only two indirect instructions – COPY to accumulator indirectly, COPY from accumulator indirectly.

A plethora of instructions: Because any instruction acting on a single register can be augmented with its indirect "dual" (including conditional and unconditional jumps, cf the Elgot-Robinson model), the inclusion of indirect instructions will double the number of single parameter/register instructions (e.g. INC (d, r), INC (i, r)). Worse, every two parameter/register instruction will have 4 possible varieties, e.g.:

CPY (d, rs, d, rd ) = COPY directly from source-register directly to destination-register
CPY (i, rsp, d, rd ) = COPY to destination-register indirectly using the source address to be found in the source-pointer register rsp.
CPY (d, rs, i, rdp ) = COPY contents of source-register indirectly into register using destination address to be found in the destination-pointer register rdp.
CPY (i, rsp, i, rdp ) = COPY indirectly the contents of the source register with address to be found in source-pointer register rsp, into the destination register with address to be found in the destination-pointer register rdp)

In a similar manner every three-register instruction that involves two source registers rs1 rs2 and a destination register rd will result in 8 varieties, for example the addition:

[rs1] + [rs2] → rd

will yield:

  • ADD ( d, rs1, d, rs2, d, rd )
  • ADD ( i, rsp1, d, rs2, d, rd )
  • ADD ( d, rs1, i, rsp2, d, rd )
  • ADD ( i, rsp1, i, rsp2, d, rd )
  • ADD ( d, rs1, d, rs2, i, rdp )
  • ADD ( i, rsp1, d, rs2, i, rdp )
  • ADD ( d, rs1, i, rsp2, i, rdp )
  • ADD ( i, rsp1, i, rsp2, i, rdp )

If we designate one register to be the "accumulator" (see below) and place strong restrictions on the various instructions allowed then we can greatly reduce the plethora of direct and indirect operations. However, one must be sure that the resulting reduced instruction-set is sufficient, and we must be aware that the reduction will come at the expense of more instructions per "significant" operation.

The notion of "accumulator A"

Historical convention dedicates a register to the accumulator, an "arithmetic organ" that literally accumulates its number during a sequence of arithmetic operations:

"The first part of our arithmetic organ ... should be a parallel storage organ which can receive a number and add it to the one already in it, which is also able to clear its contents and which can store what it contains. We will call such an organ an Accumulator. It is quite conventional in principle in past and present computing machines of the most varied types, e.g. desk multipliers, standard IBM counters, more modern relay machines, the ENIAC" (boldface in original: Goldstine and von Neumann, 1946; p. 98 in Bell and Newell 1971).

However, the accumulator comes at the expense of more instructions per arithmetic "operation", in particular with respect to what are called 'read-modify-write' instructions such as "Increment indirectly the contents of the register pointed to by register r2 ". "A" designates the "accumulator" register A:

Label        Instruction           A    r2        r378,426   Description
start:       . . .                 —    378,426   17
INCi (r2):   CPY ( i, r2, d, A )   17   378,426   17         Contents of r2 points to r378,426, which holds "17": copy this to A
             INC ( A )             18   378,426   17         Increment contents of A
             CPY ( d, A, i, r2 )   18   378,426   18         Contents of r2 points to r378,426: copy contents of A into r378,426
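The three-instruction read-modify-write sequence can be traced in Python. A sketch under the same register layout as the example (dict-based registers and the helper name are our own):

```python
def incr_indirect(regs, r2):
    """INCi (r2) as the three accumulator instructions:
    CPY (i, r2, d, A);  INC (A);  CPY (d, A, i, r2)."""
    A = regs[regs[r2]]        # CPY (i, r2, d, A): [[r2]] -> A
    A = A + 1                 # INC (A): [A] + 1 -> A
    regs[regs[r2]] = A        # CPY (d, A, i, r2): [A] -> the register [r2] names

regs = {2: 378426, 378426: 17}   # r2 points at register 378,426, which holds 17
incr_indirect(regs, 2)
assert regs[378426] == 18        # one 'significant' operation, three instructions
```

The single conceptual operation costs three machine instructions, which is exactly the expense the text attributes to the accumulator convention.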

If we stick with a specific name for the accumulator, e.g. "A", we can imply the accumulator in the instructions, for example,

INC ( A ) = INCA

However, when we write the CPY instructions without the accumulator called out the instructions are ambiguous or they must have empty parameters:

CPY ( d, r2, d, A ) = CPY (d, r2, , )
CPY ( d, A, d, r2 ) = CPY ( , , d, r2)

Historically what has happened is these two CPY instructions have received distinctive names; however, no convention exists. Tradition (e.g. Knuth's (1973) imaginary MIX computer) uses two names called LOAD and STORE. Here we are adding the "i/d" parameter:

LDA ( d/i, rs ) =def CPY ( d/i, rs, d, A )
STA ( d/i, rd ) =def CPY ( d, A, d/i, rd )

The typical accumulator-based model will have all its two-variable arithmetic and constant operations (e.g. ADD (A, r), SUB (A, r) ) use (i) the accumulator's contents, together with (ii) a specified register's contents. The one-variable operations (e.g. INC (A), DEC (A) and CLR (A) ) require only the accumulator. Both instruction-types deposit the result (e.g. sum, difference, product, quotient or remainder) in the accumulator.

Example: INCA = [A] +1 → A
Example: ADDA (rs) = [A] + [rs] → A
Example: MULA (rs) = [A] * [rs] → A

If we so choose, we can abbreviate the mnemonics because at least one source register and the destination register is always the accumulator A. Thus we have:

{ LDA (i/d, rs), STA (i/d, rd), CLRA, INCA, DECA, ADDA (rs), SUBA (rs), MULA (rs), DIVA (rs), etc. }

The notion of indirect address register "N"

If our model has an unbounded accumulator can we bound all the other registers? Not until we provide for at least one unbounded register from which we derive our indirect addresses.

The minimalist approach is to use the accumulator A itself for this purpose (Schönhage does this).

Another approach (Schönhage does this too) is to declare a specific register the "indirect address register" and confine indirection relative to this register (Schönhage's RAM0 model uses both A and N registers for indirect as well as direct instructions). Again our new register has no conventional name – perhaps "N" from "iNdex", or "iNdirect" or "address Number".

For maximum flexibility, as we have done for the accumulator A, we will consider N just another register subject to increment, decrement, clear, test, direct copy, etc. Again we can shrink the instruction to a single parameter that provides for direction and indirection, for example:

LDAN (i/d) = CPY (i/d, N, d, A) ; LoaD the Accumulator via the iNdirection register
STAN (i/d) = CPY (d, A, i/d, N) ; STore the Accumulator via the iNdirection register

Why is this such an interesting approach? At least two reasons:

(1) An instruction set with no parameters:

Schönhage does this to produce his RAM0 instruction set. See section below.

(2) Reduce a RAM to a Post-Turing machine:

Posing as minimalists, we reduce all the registers excepting the accumulator A and indirection register N e.g. r = { r0, r1, r2, ... } to an unbounded string of (very-) bounded-capacity pigeon-holes. These will do nothing but hold (very-) bounded numbers e.g. a lone bit with value { 0, 1 }. Likewise we shrink the accumulator to a single bit. We restrict any arithmetic to the registers { A, N }, use indirect operations to pull the contents of registers into the accumulator and write 0 or 1 from the accumulator to a register:

{ LDA (i, N), STA (i, N), CLR (A/N), INC (A/N), DEC(N), JZ (A/N, Iz), JZ (Iz), H }

We push further and eliminate A altogether by the use of two "constant" registers called "ERASE" and "PRINT": [ERASE]=0, [PRINT]=1.

{ CPY (d, ERASE, i, N), CPY (d, PRINT, i, N), CLR (N), INC (N), DEC (N), JZ (i, N, Iz), JZ (Iz), H }

Rename the COPY instructions and call INC (N) = RIGHT, DEC (N) = LEFT and we have the same instructions as the Post-Turing machine, plus an extra CLRN :

{ ERASE, PRINT, CLRN, RIGHT, LEFT, JZ (i, N, Iz), JZ (Iz), H }

Turing equivalence of the RAM with indirection

In the section above we informally showed that a RAM with an unbounded indirection capability produces a Post–Turing machine. The Post–Turing machine is Turing equivalent, so we have shown that the RAM with indirection is Turing equivalent.

We give here a slightly more formal demonstration. Begin by designing our model with three reserved registers "E", "P", and "N", plus an unbounded set of registers 1, 2, ..., n to the right. The registers 1, 2, ..., n will be considered "the squares of the tape". Register "N" points to "the scanned square" that "the head" is currently observing. The "head" can be thought of as being in the conditional jump – observe that it uses indirect addressing (cf Elgot-Robinson p. 398). As we decrement or increment "N" the (apparent) head will "move left" or "right" along the squares. We will move the contents of "E"=0 or "P"=1 to the "scanned square" as pointed to by N, using the indirect CPY.

The fact that our tape is left-ended presents us with a minor problem: Whenever LEFT occurs our instructions will have to test to determine whether or not the contents of "N" is zero; if so we should leave its count at "0" (this is our choice as designers – for example we might have the machine/model "trigger an event" of our choosing).

Instruction set 1 (augmented): { INC (N), DEC (N), CLR (N), CPY (d, rs, i, N), JZ (i, r, z), HALT }

The following table both defines the Post-Turing instructions in terms of their RAM equivalent instructions and gives an example of their functioning. In the example run the constant registers hold E=0 and P=1 throughout, and the (apparent) location of the head along the tape of registers r0, r1, ... is the square pointed to by N, here starting at N=3 with tape contents r3=1, r4=0:

  • start: (no instruction). Registers: E=0, P=1, N=3; scanned square r3 holds 1.
  • R right: INC ( N ). Action on registers: [N]+1 → N (N: 3 → 4). Action on finite state machine: [IR]+1 → IR.
  • P print: CPY ( d, P, i, N ). Action on registers: [P]=1 → [N]=r4 (r4: 0 → 1). Action on finite state machine: [IR]+1 → IR.
  • E erase: CPY ( d, E, i, N ). Action on registers: [E]=0 → [N]=r4 (r4: 1 → 0). Action on finite state machine: [IR]+1 → IR.
  • L left: JZ ( i, N, end ), then DEC ( N ). Action on registers: none, then [N]-1 → N (N: 4 → 3). Action on finite state machine: IF [[N]]=[r4]=0 THEN "end" → IR ELSE [IR]+1 → IR.
  • J0 ( halt ) jump_if_blank: JZ ( i, N, end ). Action on registers: none. Action on finite state machine: IF [[N]]=[r3]=0 THEN "end" → IR ELSE [IR]+1 → IR.
  • J1 ( halt ) jump_if_mark: JZ ( i, N, halt ). Action on registers: [[N]]=[r3] → A. Action on finite state machine: IF [[N]]=[r3]=0 THEN "halt" → IR ELSE [IR]+1 → IR.
  • end: . . . etc. Registers: E=0, P=1, N=3.
  • halt: H. Action on registers: none. Action on finite state machine: [IR]+1 → IR.

Example: Bounded indirection yields a machine that is not Turing equivalent

Throughout this demonstration we have to keep in mind that the instructions in the finite state machine's TABLE are bounded, i.e. finite:

"Besides a merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features [Finiteness, Definiteness, Input, Output, Effectiveness]" (italics added, Knuth p. 4-7).
The difficulty arises because the registers have explicit "names" (numbers) and our machine must call each out by name in order to "access" it.

We will build the indirect CPY ( i, q, d, φ ) with the CASE operator. The address of the target register will be specified by the contents of register "q"; once the CASE operator has determined what this number is, CPY will directly deposit the contents of the register with that number into register "φ". We will need an additional register that we will call "y" – it serves as an up-counter.

So the following is actually a constructive demonstration or proof that we can indeed simulate the indirect CPY ( i, q, d, φ ) without a "hardware" design change to our counter machine/model. However, note that because this indirect CPY is "bounded" by the size/extent of the finite state machine, a RASP using this indirect CPY can only calculate the primitive recursive functions, not the full suite of mu recursive functions.

The CASE "operator" is described in Kleene (1952) (p. 229) and in Boolos-Burgess-Jeffrey (2002) (p. 74); the latter authors emphasize its utility. The following definition is per Kleene but modified to reflect the familiar "IF-THEN-ELSE" construction.

The CASE operator "returns" a natural number into φ depending on which "case" is satisfied, starting with "case_0" and going successively through "case_last"; if no case is satisfied then the number called "default" (aka "woops") is returned into φ (here x designates some selection of parameters, e.g. register q and the string r0, ..., rlast):

Definition by cases φ (x, y):

  • case_0: IF Q0(x, y) is true THEN φ0(x, y) ELSE
  • case_1: IF Q1(x, y) is true THEN φ1(x, y) ELSE
  • cases_2 through case_next_to_last: etc. . . . . . . . . ELSE
  • case_last: IF Qlast(x, y) is true THEN φlast(x, y) ELSE
  • default: do φdefault(x, y)

Kleene requires that the "predicates" Qn that do the testing be mutually exclusive – "predicates" are functions that produce only { true, false } for output; Boolos-Burgess-Jeffrey add the requirement that the cases be "exhaustive".

We begin with a number in register q that represents the address of the target register. But what is this number? The "predicates" will test it to find out, one trial after another: JE (q, y, z) followed by INC (y). Once the number is identified explicitly, the CASE operator directly/explicitly copies the contents of this register to φ:

Definition by cases CPY (i, q, d, φ) =def φ (q, r0, ..., rlast, y) =
  • case_0: IF CLR (y), [q] = [y]=0 THEN CPY ( r0, φ ), J (exit) ELSE
  • case_1: IF INC (y), [q] = [y]=1 THEN CPY ( r1, φ ), J (exit) ELSE
  • case_2 through case n: IF . . . THEN . . . ELSE
  • case_n: IF INC (y), [q] = [y]=n THEN CPY ( rn, φ ), J (exit) ELSE
  • case_n+1 to case_last: IF . . . THEN . . . ELSE
  • case_last: IF INC (y), [q] = [y]="last" THEN CPY ( rlast, φ ), J (exit) ELSE
  • default: woops

Case_0 ( the base step of the recursion on y) looks like this:

  • case_0:
  • CLR ( y ) ; set register y = 0
  • JE ( q, y, _φ0 )
  • J ( case_1 )
  • _φ0: CPY ( r0, φ )
  • J ( exit )
  • case_1: etc.

Case_n (the induction step) looks like this; remember, each instance of "n", "n+1", ..., "last" must be an explicit natural number:

  • case_n:
  • INC ( y )
  • JE ( q, y, _φn )
  • J ( case_n+1)
  • _φn: CPY ( rn, φ )
  • J ( exit )
  • case__n+1: etc.

Case_last stops the induction and bounds the CASE operator (and thereby bounds the "indirect copy" operator):

  • case_last:
  • INC ( y )
  • JE ( q, y, _φlast )
  • J ( woops )
  • _φlast: CPY ( rlast, φ )
  • J ( exit )
  • woops: how do we handle an out-of-bounds attempt?
  • exit: etc.
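The case chain above can be mirrored in a few lines of Python. This sketch is this rewrite's own illustration (the bound LAST and the dict register file are assumptions); its point is only that the chain is explicit and finite, so addresses beyond the last case fall through to "woops":

```python
# Hypothetical sketch of the bounded CPY (i, q, d, phi): an up-counter y walks
# through the explicit cases case_0 .. case_last, comparing [q] against y.

LAST = 7                                   # the explicit, finite bound

def indirect_cpy(reg, q):
    """Return [[q]], i.e. the contents of the register addressed by [q]."""
    y = 0                                  # CLR (y): case_0, the base step
    while y <= LAST:                       # one explicit case per value of y
        if reg.get(q, 0) == y:             # JE (q, y, _phi_y)
            return reg.get(y, 0)           # CPY (r_y, phi); J (exit)
        y += 1                             # INC (y): the induction step
    raise IndexError("woops: address exceeds the bounded case table")

regs = {5: 2, 2: 99}
assert indirect_cpy(regs, 5) == 99         # [r5] = 2, so phi <- [r2] = 99
```

The `while` loop stands in for the finite, unrolled case table; a real counter machine would have one explicit case per register name, which is exactly why the construction is bounded.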

If the CASE could continue ad infinitum it would be the mu operator. But it can't – its finite state machine's "state register" has reached its maximum count (e.g. 65535 = 1111111111111111₂) or its table has run out of instructions; it is a finite machine, after all.

Examples of models

Register-to-register ("read-modify-write") model of Cook and Reckhow (1973)

The commonly encountered Cook and Reckhow model is a bit like the ternary-register Melzak model (written with Knuth mnemonics – the original instructions had no mnemonics excepting TRA, Read, Print).

  • LOAD ( C, rd ) ; C → rd, C is any integer
Example: LOAD ( 0, 5 ) will clear register 5.
  • ADD ( rs1, rs2, rd ) ; [rs1] + [rs2] → rd, the registers can be the same or different;
Example: ADD ( A, A, A ) will double the contents of register A.
  • SUB ( rs1, rs2, rd ) ; [rs1] - [rs2] → rd, the registers can be the same or different:
Example: SUB ( 3, 3, 3 ) will clear register 3.
  • COPY ( i, rp, d, rd ) ; [[rp] ] → rd, Indirectly copy the contents of the source-register pointed to by pointer-register rp into the destination register.
  • COPY ( d, rs, i, rp ) ; [rs] → [rp]. Copy the contents of source register rs into the destination-register pointed to by the pointer-register rp.
  • JNZ ( r, Iz ) ; Conditional jump if [r] is positive; i.e. IF [r] > 0 THEN jump to instruction z else continue in sequence (Cook and Reckhow call this: "TRAnsfer control to line m if Xj > 0")
  • READ ( rd ) ; copy "the input" into destination register rd
  • PRINT ( rs ) ; copy the contents of source register rs to "the output."
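A small interpreter makes the instruction list concrete. The Python sketch below is this rewrite's own (the tuple encoding, the dict register file, and the list-based I/O are assumptions, not Cook and Reckhow's notation):

```python
# Hypothetical sketch of the Cook–Reckhow register-to-register instructions.

def run_cr(program, inputs=()):
    r, out, inp, ip = {}, [], list(inputs), 0
    while ip < len(program):
        op, *a = program[ip]
        ip += 1
        if op == "LOAD":     r[a[1]] = a[0]                        # C -> rd
        elif op == "ADD":    r[a[2]] = r.get(a[0], 0) + r.get(a[1], 0)
        elif op == "SUB":    r[a[2]] = r.get(a[0], 0) - r.get(a[1], 0)
        elif op == "COPY_I": r[a[1]] = r.get(r.get(a[0], 0), 0)    # [[rp]] -> rd
        elif op == "COPY_D": r[r.get(a[1], 0)] = r.get(a[0], 0)    # [rs] -> [rp]
        elif op == "JNZ":                                          # IF [r] > 0
            if r.get(a[0], 0) > 0:
                ip = a[1]
        elif op == "READ":   r[a[0]] = inp.pop(0)
        elif op == "PRINT":  out.append(r.get(a[0], 0))
    return out

# ADD ( A, A, A ) doubles a register, as in the example above:
assert run_cr([("READ", 1), ("ADD", 1, 1, 1), ("PRINT", 1)], [21]) == [42]
```

Note the two COPY forms: COPY_I reads through a pointer register, COPY_D writes through one; together they give the model its indirection.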

Schönhage's RAM0 and RAM1 (1980)

Schönhage (1980) describes a very primitive, atomized model chosen for his proof of the equivalence of his SMM pointer machine model:

"In order to avoid any explicit addressing the RAM0 has the accumulator with contents z and an additional address register with current contents n (initially 0)" (p. 494)

RAM1 model: Schönhage demonstrates how his construction can be used to form the more common, usable form of "successor"-like RAM (using this article's mnemonics):

  • LDA k ; k → A , k is a constant, an explicit number such as "47"
  • LDA ( d, r ) ; [r] → A ; directly load A
  • LDA ( i, r ) ; [[r]] → A ; indirectly load A
  • STA ( d, r ) ; [A] → r ; directly store A
  • STA ( i, r ) ; [A] → [r] ; indirectly store A
  • JEA ( r, z ) ; IF [A] = [r] then Iz else continue
  • INCA ; [A] + 1 → A

RAM0 model: Schönhage's RAM0 machine has 6 instructions indicated by a single letter (the 6th, "C xxx", seems to involve 'skip over next parameter'). Schönhage designated the accumulator with "z", "N" with "n", etc. Rather than Schönhage's mnemonics we will use the mnemonics developed above.

  • (Z), CLRA: 0 → A
  • (A), INCA: [A] +1 → A
  • (N), CPYAN: [A] → N
  • (L), LDAA: [[A]] → A ; contents of A points to register address; put register's contents into A
  • (S), STAN: [A] → [N] ; contents of N points to register address; put contents of A into register pointed to by N
  • (C), JAZ ( z ): IF [A] = 0 THEN go to Iz ; ambiguous in his treatment

Indirection comes (i) from CPYAN (copy/transfer contents of A to N) working with store_A_via_N STAN, and (ii) from the peculiar indirection instruction LDAA ( [[A]] → A ).
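The RAM0 primitives can be sketched directly. This Python sketch is this rewrite's own illustration (the class and method names are assumptions; the semantics follow the list above):

```python
# Hypothetical sketch of Schönhage's RAM0 primitives.

class RAM0:
    def __init__(self):
        self.A, self.N, self.reg = 0, 0, {}

    def clra(self):  self.A = 0                        # (Z): 0 -> A
    def inca(self):  self.A += 1                       # (A): [A]+1 -> A
    def cpyan(self): self.N = self.A                   # (N): [A] -> N
    def ldaa(self):  self.A = self.reg.get(self.A, 0)  # LDAA: [[A]] -> A
    def stan(self):  self.reg[self.N] = self.A         # (S): [A] -> [N]

# Store 4 into r3 via N, then read it back via LDAA:
m = RAM0()
for _ in range(3):
    m.inca()                    # A = 3
m.cpyan()                       # N = 3
m.inca()                        # A = 4
m.stan()                        # r3 <- 4
m.clra()
for _ in range(3):
    m.inca()                    # A = 3 again
m.ldaa()                        # A <- [r3]
assert m.A == 4
```

The example shows the two indirection paths the text describes: CPYAN + STAN to store through N, and LDAA to load through A.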

Footnotes

Finite vs unbounded

The definitional fact that any sort of counter machine without an unbounded register-"address" register must specify a register "r" by name indicates that the model requires "r" to be finite, although it is "unbounded" in the sense that the model implies no upper limit to the number of registers necessary to do its job(s). For example, we do not require r < 83,617,563,821,029,283,746 nor r < 2^1,000,001, etc.

Thus our model can "expand" the number of registers, if necessary to perform a certain computation. However this does mean that whatever number the model expands to must be finite – it must be indexable with a natural number: ω is not an option.

We can escape this restriction by providing an unbounded register to provide the address of the register that specifies an indirect address.

Vertebral column

From Wikipedia, the free encyclopedia
The human vertebral column and its regions
Vertebral column of a goat

The vertebral column, also known as the spinal column, spine or backbone, is the core part of the axial skeleton in vertebrate animals. The vertebral column is the defining and eponymous characteristic of the vertebrate endoskeleton, in which the notochord (an elastic collagen-wrapped glycoprotein rod) found in all chordates has been replaced by a segmented series of mineralized irregular bones (or sometimes, cartilages) called vertebrae, separated by fibrocartilaginous intervertebral discs (the center of which is a notochord remnant). The dorsal portion of the vertebral column houses the spinal canal, an elongated cavity formed by alignment of the vertebral neural arches that encloses and protects the spinal cord, with spinal nerves exiting via the intervertebral foramina to innervate each body segment.

There are around 50,000 species of animals that have a vertebral column. The human spine is one of the most-studied examples, as the general structure of human vertebrae is fairly typical (homologous) of that found in other mammals, reptiles and birds. The shape of the vertebral body does, however, vary somewhat between different groups of living species.

Individual vertebrae are named according to their corresponding body region (neck, thorax, abdomen, pelvis or tail). In clinical medicine, features on vertebrae (particularly the spinous process) can be used as surface landmarks to guide medical procedures such as lumbar punctures and spinal anesthesia. There are also many different spinal diseases in humans that can affect both the bony vertebrae and the intervertebral discs, with kyphosis/scoliosis, ankylosing spondylitis, degenerative discs and spina bifida being recognizable examples.

Structure

The number of vertebrae in a region can vary but overall the number remains the same. In a human vertebral column, there are normally 33 vertebrae. The upper 24 pre-sacral vertebrae are articulating and separated from each other by intervertebral discs, and the lower nine are fused in adults, five in the sacrum and four in the coccyx, or tailbone. The articulating vertebrae are named according to their region of the spine. There are 7 cervical vertebrae, 12 thoracic vertebrae and 5 lumbar vertebrae. The number of those in the cervical region, however, is only rarely changed, while that in the coccygeal region varies most. Excluding rare deviations, the total number of vertebrae ranges from 32 to 35. In about 10% of people, both the total number of pre-sacral vertebrae and the number of vertebrae in individual parts of the spine can vary. The most frequent deviations are: 11 (rarely 13) thoracic vertebrae, 4 or 6 lumbar vertebrae, 3 or 5 coccygeal vertebrae (rarely up to 7).

There are numerous ligaments extending the length of the column, which include the anterior and posterior longitudinal ligaments at the front and back of the vertebral bodies, the ligamentum flavum deep to the laminae, the interspinous and supraspinous ligaments between the spinous processes, and the intertransverse ligaments between the transverse processes.

Vertebrae

Numbering order of the vertebrae of the human spinal column

The vertebrae in the human vertebral column are divided into different body regions, which correspond to the curvatures of the vertebral column. The articulating vertebrae are named according to their region of the spine. Vertebrae in these regions are essentially alike, with minor variation. These regions are called the cervical spine, thoracic spine, lumbar spine, sacrum, and coccyx. There are seven cervical vertebrae, twelve thoracic vertebrae, and five lumbar vertebrae.

The number of vertebrae in a region can vary but overall the number remains the same. The number of those in the cervical region, however, is only rarely changed. The vertebrae of the cervical, thoracic, and lumbar spines are independent bones and generally quite similar. The vertebrae of the sacrum and coccyx are usually fused and unable to move independently. Two special vertebrae are the atlas and axis, on which the head rests.

Anatomy of a vertebra

A typical vertebra consists of two parts: the vertebral body (or centrum), which is ventral (or anterior, in the standard anatomical position) and withstands axial structural load; and the vertebral arch (also known as neural arch), which is dorsal (or posterior) and provides articulations and anchorages for ribs and core skeletal muscles. Together, these enclose the vertebral foramen, the series of which align to form the spinal canal, a body cavity that contains the spinal cord. Because the vertebral column will outgrow the spinal cord during child development, by adulthood the spinal cord often ends at the upper lumbar spine (at around L1/L2 level), the lower (caudal) end of the spinal canal is occupied by a ponytail-like bundle of spinal nerves descriptively called cauda equina (from Latin "horse's tail"), and the sacrum and coccyx are fused without a central foramen.

The vertebral arch is formed by a ventral pair of pedicles and a dorsal pair of laminae, and supports seven processes, four articular, two transverse and one spinous, the latter also being known as the neural spine. The transverse and spinous processes and their associated ligaments serve as important attachment sites for back and paraspinal muscles and the thoracolumbar fasciae. The spinous processes of the cervical and lumbar regions can be felt through the skin, and are important surface landmarks in clinical medicine.

The four articular processes form two pairs of plane facet joints above and below each vertebra, articulating with those of the adjacent vertebrae and joined by a thin portion of the neural arch called the pars interarticularis. The orientation of the facet joints restricts the range of motion between the vertebrae. Underneath each pedicle is a small hole (enclosed by the pedicle of the vertebra below) called the intervertebral foramen, which transmits the corresponding spinal nerve and dorsal root ganglion that exit the spinal canal.

From top to bottom, the vertebrae are:

  • Cervical vertebrae (C1–C7)
  • Thoracic vertebrae (T1–T12)
  • Lumbar vertebrae (L1–L5)
  • Sacral vertebrae (S1–S5), fused into the sacrum in adults
  • Coccygeal vertebrae (usually Co1–Co4), fused into the coccyx

Combined vertebral regions

For some medical purposes, adjacent vertebral regions may be considered together:

  • Cervicothoracic spine (or region or division): the combined region of the cervical vertebrae and the thoracic vertebrae
  • Thoracolumbar spine (or region or division): the combined region of the thoracic vertebrae and the lumbar vertebrae
  • Lumbosacral spine (or region or division): the combined region of the lumbar vertebrae and the sacral vertebrae

Shape

The vertebral column is curved in several places, a result of human bipedal evolution. These curves increase the vertebral column's strength, flexibility, and ability to absorb shock, stabilising the body in an upright position. When the load on the spine is increased, the curvatures increase in depth (become more curved) to accommodate the extra weight. They then spring back when the weight is removed.

The upper cervical spine has a curve, convex forward, that begins at the axis (second cervical vertebra) at the apex of the odontoid process or dens and ends at the middle of the second thoracic vertebra; it is the least marked of all the curves. This inward curve is known as a lordotic curve.

A thoracic spine X-ray of a 57-year-old male.

The thoracic curve, concave forward, begins at the middle of the second and ends at the middle of the twelfth thoracic vertebra. Its most prominent point behind corresponds to the spinous process of the seventh thoracic vertebra. This curve is known as a kyphotic curve.

Lateral lumbar X-ray of a 34-year-old male

The lumbar curve is more marked in the female than in the male; it begins at the middle of the last thoracic vertebra, and ends at the sacrovertebral angle. It is convex anteriorly, the convexity of the lower three vertebrae being much greater than that of the upper two. This curve is described as a lordotic curve.

The sacral curve begins at the sacrovertebral articulation, and ends at the point of the coccyx; its concavity is directed downward and forward as a kyphotic curve.

The thoracic and sacral kyphotic curves are termed primary curves, because they are present in the fetus. The cervical and lumbar curves are compensatory, or secondary, and are developed after birth. The cervical curve forms when the infant is able to hold up its head (at three or four months) and sit upright (at nine months). The lumbar curve forms later from twelve to eighteen months, when the child begins to walk.

Surfaces

Anterior surface

When viewed from in front, the width of the bodies of the vertebrae is seen to increase from the second cervical to the first thoracic; there is then a slight diminution in the next three vertebrae. Below this, there is again a gradual and progressive increase in width as low as the sacrovertebral angle. From this point there is a rapid diminution, to the apex of the coccyx.

Posterior surface

From behind, the vertebral column presents in the median line the spinous processes. In the cervical region (with the exception of the second and seventh vertebrae), these are short, horizontal, and bifid. In the upper part of the thoracic region they are directed obliquely downward; in the middle they are almost vertical, and in the lower part they are nearly horizontal. In the lumbar region they are nearly horizontal. The spinous processes are separated by considerable intervals in the lumbar region, by narrower intervals in the neck, and are closely approximated in the middle of the thoracic region. Occasionally one of these processes deviates a little from the median line — which can sometimes be indicative of a fracture or a displacement of the spine. On either side of the spinous processes is the vertebral groove formed by the laminae in the cervical and lumbar regions, where it is shallow, and by the laminae and transverse processes in the thoracic region, where it is deep and broad; these grooves lodge the deep muscles of the back. Lateral to the spinous processes are the articular processes, and still more laterally the transverse processes. In the thoracic region, the transverse processes stand backward, on a plane considerably behind that of the same processes in the cervical and lumbar regions. In the cervical region, the transverse processes are placed in front of the articular processes, lateral to the pedicles and between the intervertebral foramina. In the thoracic region they are posterior to the pedicles, intervertebral foramina, and articular processes. In the lumbar region they are in front of the articular processes, but behind the intervertebral foramina.

Lateral surfaces

The sides of the vertebral column are separated from the posterior surface by the articular processes in the cervical and thoracic regions and by the transverse processes in the lumbar region. In the thoracic region, the sides of the bodies of the vertebrae are marked in the back by the facets for articulation with the heads of the ribs. More posteriorly are the intervertebral foramina, formed by the juxtaposition of the vertebral notches, oval in shape, smallest in the cervical and upper part of the thoracic regions and gradually increasing in size to the last lumbar. They transmit the special spinal nerves and are situated between the transverse processes in the cervical region and in front of them, in the thoracic and lumbar regions.

Ligaments

There are different ligaments involved in the holding together of the vertebrae in the column, and in the column's movement. The anterior and posterior longitudinal ligaments extend the length of the vertebral column along the front and back of the vertebral bodies. The interspinous ligaments connect the adjoining spinous processes of the vertebrae. The supraspinous ligament extends the length of the spine running along the back of the spinous processes, from the sacrum to the seventh cervical vertebra. From there it is continuous with the nuchal ligament.

Development

The striking segmented pattern of the spine is established during embryogenesis, when somites are rhythmically added to the posterior of the embryo. Somite formation begins around the third week, when the embryo begins gastrulation, and continues until all somites are formed. Their number varies between species: there are 42 to 44 somites in the human embryo and around 52 in the chick embryo. The somites are spheres, formed from the paraxial mesoderm that lies at the sides of the neural tube, and they contain the precursors of spinal bone, the vertebrae, ribs and some of the skull, as well as muscle, ligaments and skin.

Somitogenesis and the subsequent distribution of somites is controlled by a clock and wavefront model acting in cells of the paraxial mesoderm. Soon after their formation, sclerotomes, which give rise to some of the bone of the skull, the vertebrae and ribs, migrate, leaving the remainder of the somite, now termed a dermamyotome, behind. This then splits to give the myotomes, which will form the muscles, and the dermatomes, which will form the skin of the back. Sclerotomes become subdivided into an anterior and a posterior compartment. This subdivision plays a key role in the definitive patterning of vertebrae that form when the posterior part of one somite fuses to the anterior part of the consecutive somite during a process termed resegmentation.

Disruption of the somitogenesis process in humans results in diseases such as congenital scoliosis. So far, the human homologues of three genes associated with the mouse segmentation clock (MESP2, DLL3 and LFNG) have been shown to be mutated in cases of congenital scoliosis, suggesting that the mechanisms involved in vertebral segmentation are conserved across vertebrates. In humans, the first four somites are incorporated into the base of the occipital bone of the skull and the next 33 somites form the vertebrae, ribs, muscles, ligaments and skin. The remaining posterior somites degenerate.
During the fourth week of embryogenesis, the sclerotomes shift their position to surround the spinal cord and the notochord. This column of tissue has a segmented appearance, with alternating areas of dense and less dense areas.

As the sclerotome develops, it condenses further eventually developing into the vertebral body. Development of the appropriate shapes of the vertebral bodies is regulated by HOX genes.

The less dense tissue that separates the sclerotome segments develops into the intervertebral discs.

The notochord disappears in the sclerotome (vertebral body) segments but persists in the region of the intervertebral discs as the nucleus pulposus. The nucleus pulposus and the fibers of the anulus fibrosus make up the intervertebral disc.

The primary curves (thoracic and sacral curvatures) form during fetal development. The secondary curves develop after birth. The cervical curvature forms as a result of lifting the head and the lumbar curvature forms as a result of walking.

Function

Spinal cord

The spinal cord nested in the vertebral column.

The vertebral column surrounds the spinal cord which travels within the spinal canal, formed from a central hole within each vertebra. The spinal cord is part of the central nervous system that supplies nerves and receives information from the peripheral nervous system within the body. The spinal cord consists of grey and white matter and a central cavity, the central canal. Adjacent to each vertebra emerge spinal nerves. The spinal nerves provide sympathetic nervous supply to the body, with nerves emerging forming the sympathetic trunk and the splanchnic nerves.

The spinal canal follows the different curves of the column; it is large and triangular in those parts of the column that enjoy the greatest freedom of movement, such as the cervical and lumbar regions, and is small and rounded in the thoracic region, where motion is more limited. The spinal cord terminates in the conus medullaris and cauda equina.

Clinical significance

3D Medical Animation still shot of Spina Bifida

Disease

Spina bifida is a congenital disorder in which there is a defective closure of the vertebral arch. Sometimes the spinal meninges and also the spinal cord can protrude through this, and this is called spina bifida cystica. Where the condition does not involve this protrusion it is known as spina bifida occulta. Sometimes all of the vertebral arches may remain incomplete.

Another, though rare, congenital disease is Klippel–Feil syndrome, which is the fusion of any two of the cervical vertebrae.

Spondylolisthesis is the forward displacement of a vertebra and retrolisthesis is a posterior displacement of one vertebral body with respect to the adjacent vertebra to a degree less than a dislocation.

Spondylolysis, also known as a pars defect, is a defect or fracture at the pars interarticularis of the vertebral arch.

Spinal disc herniation, more commonly called a "slipped disc", is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia.

Spinal stenosis is a narrowing of the spinal canal which can occur in any region of the spine though less commonly in the thoracic region. The stenosis can constrict the spinal canal giving rise to a neurological deficit.

Pain at the coccyx (tailbone) is known as coccydynia.

Spinal cord injury is damage to the spinal cord that causes changes in its function, either temporary or permanent. Spinal cord injuries can be divided into categories: complete transection, hemisection, central spinal cord lesions, posterior spinal cord lesions, and anterior spinal cord lesions.

Scalloping of the vertebrae is an increase in the concavity of the posterior vertebral body. It can be seen on lateral X-rays and on sagittal views of CT and MRI scans. The concavity is due to increased pressure exerted on the vertebrae by a mass. Internal spinal masses such as spinal astrocytoma, ependymoma, schwannoma, neurofibroma, and achondroplasia can cause vertebral scalloping.

Curvature

Diagram showing normal curvature of the vertebrae from childhood to teenage

Excessive or abnormal spinal curvature is classed as a spinal disease or dorsopathy and includes the following abnormal curvatures:

  • Kyphosis is an exaggerated kyphotic (convex) curvature of the thoracic region in the sagittal plane, also called hyperkyphosis. This produces the so-called "humpback" or "dowager's hump", a condition commonly resulting from osteoporosis.
  • Lordosis is an exaggerated lordotic (concave) curvature of the lumbar region in the sagittal plane, known as lumbar hyperlordosis and also as "swayback". Temporary lordosis is common during pregnancy.
  • Scoliosis, lateral curvature, is the most common abnormal curvature, occurring in 0.5% of the population. It is more common among females and may result from unequal growth of the two sides of one or more vertebrae, so that they do not fuse properly. It can also be caused by pulmonary atelectasis (partial or complete deflation of one or more lobes of the lungs) as observed in asthma or pneumothorax.
  • Kyphoscoliosis, a combination of kyphosis and scoliosis.

Anatomical landmarks

Surface projections of organs of the torso. The transpyloric line is seen at L1

Individual vertebrae of the human vertebral column can be felt and used as surface anatomy, with reference points taken from the middle of the vertebral body. This provides anatomical landmarks that can be used to guide procedures such as a lumbar puncture, and also as vertical reference points to describe the locations of other parts of human anatomy, such as the positions of organs.

Other animals

Variations in vertebrae

The general structure of vertebrae in other animals is largely the same as in humans. Individual vertebrae are composed of a centrum (body), arches protruding from the top and bottom of the centrum, and various processes projecting from the centrum and/or arches. An arch extending from the top of the centrum is called a neural arch, while the haemal arch is found underneath the centrum in the caudal (tail) vertebrae of fish, most reptiles, some birds, some dinosaurs and some mammals with long tails. The vertebral processes can either give the structure rigidity, help them articulate with ribs, or serve as muscle attachment points. Common types are transverse process, diapophyses, parapophyses, and zygapophyses (both the cranial zygapophyses and the caudal zygapophyses). The centrum of the vertebra can be classified based on the fusion of its elements. In temnospondyls, bones such as the spinous process, the pleurocentrum and the intercentrum are separate ossifications. Fused elements, however, classify a vertebra as having holospondyly.

A vertebra can also be described in terms of the shape of the ends of the centrum. Centra with flat ends are acoelous, like those in mammals. These flat ends of the centra are especially good at supporting and distributing compressive forces. Amphicoelous vertebra have centra with both ends concave. This shape is common in fish, where most motion is limited. Amphicoelous centra often are integrated with a full notochord. Procoelous vertebrae are anteriorly concave and posteriorly convex. They are found in frogs and modern reptiles. Opisthocoelous vertebrae are the opposite, possessing anterior convexity and posterior concavity. They are found in salamanders, and in some non-avian dinosaurs. Heterocoelous vertebrae have saddle-shaped articular surfaces. This type of configuration is seen in turtles that retract their necks, and birds, because it permits extensive lateral and vertical flexion motion without stretching the nerve cord too extensively or wringing it about its long axis.

In horses, the Arabian breed can have one fewer vertebra and one fewer pair of ribs. This anomaly disappears in foals that are the product of an Arabian and another breed of horse.

Regional vertebrae

Vertebrae are defined by their location in the vertebral column. Cervical vertebrae are those in the neck area. With the exception of the two sloth genera (Choloepus and Bradypus) and the manatee genus (Trichechus), all mammals have seven cervical vertebrae. In other vertebrates, the number of cervical vertebrae can range from a single vertebra in amphibians to as many as 25 in swans or 76 in the extinct plesiosaur Elasmosaurus. The dorsal vertebrae range from the bottom of the neck to the top of the pelvis. Dorsal vertebrae attached to the ribs are called thoracic vertebrae, while those without ribs are called lumbar vertebrae. The sacral vertebrae are those in the pelvic region, and range from one in amphibians, to two in most birds and modern reptiles, or up to three to five in mammals. When multiple sacral vertebrae are fused into a single structure, it is called the sacrum. The synsacrum is a similar fused structure found in birds that is composed of the sacral, lumbar, and some of the thoracic and caudal vertebrae, as well as the pelvic girdle. Caudal vertebrae compose the tail, and the final few can be fused into the pygostyle in birds, or into the coccyx, or tailbone, in chimpanzees (and humans).

Fish and amphibians

A vertebra (diameter 5 mm) of a small ray-finned fish

The vertebrae of lobe-finned fishes consist of three discrete bony elements. The vertebral arch surrounds the spinal cord, and is of broadly similar form to that found in most other vertebrates. Just beneath the arch lies a small plate-like pleurocentrum, which protects the upper surface of the notochord, and below that, a larger arch-shaped intercentrum to protect the lower border. Both of these structures are embedded within a single cylindrical mass of cartilage. A similar arrangement was found in the primitive Labyrinthodonts, but in the evolutionary line that led to reptiles (and hence, also to mammals and birds), the intercentrum became partially or wholly replaced by an enlarged pleurocentrum, which in turn became the bony vertebral body. In most ray-finned fishes, including all teleosts, these two structures are fused with, and embedded within, a solid piece of bone superficially resembling the vertebral body of mammals. In living amphibians, there is simply a cylindrical piece of bone below the vertebral arch, with no trace of the separate elements present in the early tetrapods.

In cartilaginous fish, such as sharks, the vertebrae consist of two cartilaginous tubes. The upper tube is formed from the vertebral arches, but also includes additional cartilaginous structures filling in the gaps between the vertebrae, and so enclosing the spinal cord in an essentially continuous sheath. The lower tube surrounds the notochord, and has a complex structure, often including multiple layers of calcification.

Lampreys have vertebral arches, but nothing resembling the vertebral bodies found in all higher vertebrates. Even the arches are discontinuous, consisting of separate pieces of arch-shaped cartilage around the spinal cord in most parts of the body, changing to long strips of cartilage above and below in the tail region. Hagfishes lack a true vertebral column, and are therefore not properly considered vertebrates, but a few tiny neural arches are present in the tail.

Other vertebrates

The general structure of human vertebrae is fairly typical of that found in mammals, reptiles, and birds. The shape of the vertebral body does, however, vary somewhat between different groups. In mammals, such as humans, it typically has flat upper and lower surfaces, while in reptiles the anterior surface commonly has a concave socket into which the expanded convex face of the next vertebral body fits. Even these patterns are only generalisations, however, and there may be variation in form of the vertebrae along the length of the spine even within a single species. Some unusual variations include the saddle-shaped sockets between the cervical vertebrae of birds and the presence of a narrow hollow canal running down the centre of the vertebral bodies of geckos and tuataras, containing a remnant of the notochord.

Reptiles often retain the primitive intercentra, which are present as small crescent-shaped bony elements lying between the bodies of adjacent vertebrae; similar structures are often found in the caudal vertebrae of mammals. In the tail, these are attached to chevron-shaped bones called haemal arches, which attach below the base of the spine, and help to support the musculature. These latter bones are probably homologous with the ventral ribs of fish. The number of vertebrae in the spines of reptiles is highly variable, and may be several hundred in some species of snake.

In birds, there is a variable number of cervical vertebrae, which often form the only truly flexible part of the spine. The thoracic vertebrae are partially fused, providing a solid brace for the wings during flight. The sacral vertebrae are fused with the lumbar vertebrae, and some thoracic and caudal vertebrae, to form a single structure, the synsacrum, which is thus of greater relative length than the sacrum of mammals. In living birds, the remaining caudal vertebrae are fused into a further bone, the pygostyle, for attachment of the tail feathers.

Aside from the tail, the number of vertebrae in mammals is generally fairly constant. There are almost always seven cervical vertebrae (sloths and manatees are among the few exceptions), followed by around twenty or so further vertebrae, divided between the thoracic and lumbar forms, depending on the number of ribs. There are generally three to five vertebrae in the sacrum, and anything up to fifty caudal vertebrae.

Dinosaurs

The vertebral column in dinosaurs consists of the cervical (neck), dorsal (back), sacral (hips), and caudal (tail) vertebrae. Saurischian dinosaur vertebrae sometimes possess features known as pleurocoels, which are hollow depressions on the lateral portions of the vertebrae, perforated to create an entrance into the air chambers within the vertebrae, which served to decrease the weight of these bones without sacrificing strength. These pleurocoels were filled with air sacs, which would have further decreased weight. In sauropod dinosaurs, the largest known land vertebrates, pleurocoels and air sacs may have reduced the animal's weight by over a ton in some instances, a handy evolutionary adaptation in animals that grew to over 30 metres in length. In many hadrosaur and theropod dinosaurs, the caudal vertebrae were reinforced by ossified tendons. The presence of three or more sacral vertebrae, in association with the hip bones, is one of the defining characteristics of dinosaurs. The occipital condyle is a structure on the posterior part of a dinosaur's skull that articulates with the first cervical vertebra.

Operator (computer programming)

From Wikipedia, the free encyclopedia