
Sunday, July 3, 2022

Formic acid

From Wikipedia, the free encyclopedia

Names
Preferred IUPAC name: Formic acid
Systematic IUPAC name: Methanoic acid
Other names: Carbonous acid; Formylic acid; Hydrogen carboxylic acid; Hydroxy(oxo)methane; Metacarbonoic acid; Oxocarbinic acid; Oxomethanol

Identifiers
Beilstein Reference: 1209246
ECHA InfoCard: 100.000.527
EC Number: 200-579-1
E number: E236 (preservatives)
Gmelin Reference: 1008
RTECS number: LQ4900000

Properties
Chemical formula: CH2O2
Molar mass: 46.025 g·mol−1
Appearance: Colorless fuming liquid
Odor: Pungent, penetrating
Density: 1.220 g/mL
Melting point: 8.4 °C (47.1 °F; 281.5 K)
Boiling point: 100.8 °C (213.4 °F; 373.9 K)
Solubility in water: Miscible
Solubility: Miscible with ether, acetone, ethyl acetate, glycerol, methanol, ethanol; partially soluble in benzene, toluene, xylenes
log P: −0.54
Vapor pressure: 35 mmHg (20 °C)
Acidity (pKa): 3.745
Conjugate base: Formate
Magnetic susceptibility: −19.90·10−6 cm3/mol
Refractive index (nD): 1.3714 (20 °C)
Viscosity: 1.57 cP at 26.8 °C

Structure
Molecular shape: Planar
Dipole moment: 1.41 D (gas)

Thermochemistry
Std molar entropy: 131.8 J/(mol·K)
Std enthalpy of formation: −425.0 kJ/mol
Std enthalpy of combustion: −254.6 kJ/mol

Pharmacology
ATCvet code: QP53AG01 (WHO)

Hazards
Occupational safety and health (OHS/OSH) main hazards: Corrosive; irritant; sensitizer
GHS labelling: GHS05 (Corrosive); signal word: Danger
Hazard statements: H314
Precautionary statements: P260, P264, P280, P301+P330+P331, P303+P361+P353, P304+P340, P305+P351+P338, P310, P321, P363, P405, P501
NFPA 704 (fire diamond): Health 3, Flammability 2, Instability 0
Flash point: 69 °C (156 °F; 342 K)
Autoignition temperature: 601 °C (1,114 °F; 874 K)
Explosive limits: 14–34%; 18–57% (90% solution)
Lethal dose or concentration (LD, LC):
  LD50 (oral): 700 mg/kg (mouse), 1100 mg/kg (rat), 4000 mg/kg (dog)
  Lethal concentration (inhalation): 7853 ppm (rat, 15 min); 3246 ppm (mouse, 15 min)
NIOSH (US health exposure limits):
  PEL (Permissible): TWA 5 ppm (9 mg/m3)
  REL (Recommended): TWA 5 ppm (9 mg/m3)
  IDLH (Immediate danger): 30 ppm
Safety data sheet (SDS): MSDS from JT Baker

Related compounds
Related carboxylic acids: Acetic acid; Propionic acid
Related compounds: Formaldehyde; Methanol

Supplementary data page: Formic acid (data page)
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).

Formic acid, systematically named methanoic acid, is the simplest carboxylic acid, and has the chemical formula HCOOH. It is an important intermediate in chemical synthesis and occurs naturally, most notably in some ants. The word "formic" comes from the Latin word for ant, formica, referring to its early isolation by the distillation of ant bodies. Esters, salts, and the anion derived from formic acid are called formates. Industrially, formic acid is produced from methanol.

Properties

Cyclic dimer of formic acid; dashed green lines represent hydrogen bonds

Formic acid is a colorless liquid having a pungent, penetrating odor at room temperature, comparable to the related acetic acid. It is miscible with water and most polar organic solvents, and is somewhat soluble in hydrocarbons. In hydrocarbons and in the vapor phase, it consists of hydrogen-bonded dimers rather than individual molecules. Owing to its tendency to hydrogen-bond, gaseous formic acid does not obey the ideal gas law. Solid formic acid, which can exist in either of two polymorphs, consists of an effectively endless network of hydrogen-bonded formic acid molecules. Formic acid forms a high-boiling azeotrope with water (22.4%). Liquid formic acid tends to supercool.

Natural occurrence

In nature, formic acid is found in most ants and in stingless bees of the genus Oxytrigona. The wood ants from the genus Formica can spray formic acid on their prey or to defend the nest. The puss moth caterpillar (Cerura vinula) will spray it as well when threatened by predators. It is also found in the trichomes of stinging nettle (Urtica dioica). Formic acid is a naturally occurring component of the atmosphere primarily due to forest emissions.

Production

In 2009, the worldwide capacity for producing formic acid was 720 thousand tonnes (1.6 billion pounds) per year, roughly equally divided between Europe (350 thousand tonnes or 770 million pounds, mainly in Germany) and Asia (370 thousand tonnes or 820 million pounds, mainly in China) while production was below 1 thousand tonnes or 2.2 million pounds per year in all other continents. It is commercially available in solutions of various concentrations between 85 and 99 w/w %. As of 2009, the largest producers are BASF, Eastman Chemical Company, LC Industrial, and Feicheng Acid Chemicals, with the largest production facilities in Ludwigshafen (200 thousand tonnes or 440 million pounds per year, BASF, Germany), Oulu (105 thousand tonnes or 230 million pounds, Eastman, Finland), Nakhon Pathom (n/a, LC Industrial), and Feicheng (100 thousand tonnes or 220 million pounds, Feicheng, China). 2010 prices ranged from around €650/tonne (equivalent to around $800/tonne) in Western Europe to $1250/tonne in the United States.

From methyl formate and formamide

When methanol and carbon monoxide are combined in the presence of a strong base, the result is methyl formate, according to the chemical equation:

CH3OH + CO → HCO2CH3

In industry, this reaction is performed in the liquid phase at elevated pressure. Typical reaction conditions are 80 °C and 40 atm. The most widely used base is sodium methoxide. Hydrolysis of the methyl formate produces formic acid:

HCO2CH3 + H2O → HCOOH + CH3OH

Efficient hydrolysis of methyl formate requires a large excess of water. Some routes proceed indirectly by first treating the methyl formate with ammonia to give formamide, which is then hydrolyzed with sulfuric acid:

HCO2CH3 + NH3 → HC(O)NH2 + CH3OH
2 HC(O)NH2 + 2H2O + H2SO4 → 2HCO2H + (NH4)2SO4

A disadvantage of this approach is the need to dispose of the ammonium sulfate byproduct. This problem has led some manufacturers to develop energy-efficient methods of separating formic acid from the excess water used in direct hydrolysis. In one of these processes, used by BASF, the formic acid is removed from the water by liquid-liquid extraction with an organic base.

Niche and obsolete chemical routes

By-product of acetic acid production

A significant amount of formic acid is produced as a byproduct in the manufacture of other chemicals. At one time, acetic acid was produced on a large scale by oxidation of alkanes, by a process that cogenerates significant formic acid. This oxidative route to acetic acid has declined in importance so that the aforementioned dedicated routes to formic acid have become more important.

Hydrogenation of carbon dioxide

The catalytic hydrogenation of CO2 to formic acid has long been studied. This reaction can be conducted homogeneously.

Oxidation of biomass

Formic acid can also be obtained by aqueous catalytic partial oxidation of wet biomass by the OxFA process. A Keggin-type polyoxometalate (H5PV2Mo10O40) is used as the homogeneous catalyst to convert sugars, wood, waste paper, or cyanobacteria to formic acid and CO2 as the sole byproduct. Yields of up to 53% formic acid can be achieved.

Laboratory methods

In the laboratory, formic acid can be obtained by heating oxalic acid in glycerol and extraction by steam distillation. Glycerol acts as a catalyst, as the reaction proceeds through a glyceryl oxalate intermediate. If the reaction mixture is heated to higher temperatures, allyl alcohol results. The net reaction is thus:

C2O4H2 → HCO2H + CO2

Another illustrative method involves the reaction between lead formate and hydrogen sulfide, driven by the formation of lead sulfide.

Pb(HCOO)2 + H2S → 2HCOOH + PbS

Electrochemical production

It has been reported that formate can be formed by the electrochemical reduction of CO2 (in the form of bicarbonate) at a lead cathode at pH 8.6:

HCO3− + H2O + 2 e− → HCO2− + 2 OH−

or

CO2 + H2O + 2 e− → HCO2− + OH−

If the feed is CO2 and oxygen is evolved at the anode, the total reaction is:

CO2 + OH− → HCO2− + 1/2 O2
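
For a rough sense of scale, Faraday's law fixes the charge this two-electron reduction requires per unit of formate produced. The sketch below (plain Python, assuming an idealized 100% current efficiency and an arbitrarily chosen 100 A cell current) is illustrative only.

    # Back-of-the-envelope Faraday's-law estimate for CO2-to-formate electrolysis.
    # Assumes ideal (100%) current efficiency; real cells achieve less.
    F = 96485.0          # Faraday constant, C per mol of electrons
    n_electrons = 2      # electrons per formate ion (see half-reactions above)
    M_formate = 45.02    # g/mol, HCO2-

    grams = 1000.0                         # target: 1 kg of formate
    moles = grams / M_formate              # ~22.2 mol
    charge = moles * n_electrons * F       # coulombs required
    current = 100.0                        # assumed cell current, amperes
    hours = charge / current / 3600.0

    print(f"charge needed: {charge / 1e6:.1f} MC")   # ~4.3 MC
    print(f"time at {current:.0f} A: {hours:.1f} h") # ~12 h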

This has been proposed as a large-scale source of formate by various groups. The formate could be used as feed to modified E. coli bacteria for producing biomass. Natural microbes do exist that can feed on formic acid or formate (see Methylotroph).

Biosynthesis

Formic acid is named after ants which have high concentrations of the compound in their venom. In ants, formic acid is derived from serine through a 5,10-methenyltetrahydrofolate intermediate. The conjugate base of formic acid, formate, also occurs widely in nature. An assay for formic acid in body fluids, designed for determination of formate after methanol poisoning, is based on the reaction of formate with bacterial formate dehydrogenase.

Artificial photosynthesis

In August 2020 researchers at Cambridge University announced a stand-alone advanced 'photosheet' technology that converts sunlight, carbon dioxide and water into oxygen and formic acid with no other inputs.

Uses

A major use of formic acid is as a preservative and antibacterial agent in livestock feed. In Europe, it is applied on silage, including fresh hay, to promote the fermentation of lactic acid and to suppress the formation of butyric acid; it also allows fermentation to occur quickly, and at a lower temperature, reducing the loss of nutritional value. Formic acid arrests certain decay processes and causes the feed to retain its nutritive value longer, and so it is widely used to preserve winter feed for cattle. In the poultry industry, it is sometimes added to feed to kill E. coli bacteria. Use as a preservative for silage and (other) animal feed constituted 30% of the global consumption in 2009.

Formic acid is also significantly used in the production of leather, including tanning (23% of the global consumption in 2009), and in dyeing and finishing textiles (9% of the global consumption in 2009) because of its acidic nature. Use as a coagulant in the production of rubber consumed 6% of the global production in 2009.

Formic acid is also used in place of mineral acids for various cleaning products, such as limescale remover and toilet bowl cleaner. Some formate esters are artificial flavorings and perfumes.

Beekeepers use formic acid as a miticide against the tracheal mite (Acarapis woodi) and the Varroa destructor mite and Varroa jacobsoni mite.

Formic acid application has been reported to be an effective treatment for warts.

Formic acid can be used in a fuel cell (it can be used directly in formic acid fuel cells and indirectly in hydrogen fuel cells).

It is possible to use formic acid as an intermediary to produce isobutanol from CO2 using microbes.

Formic acid has a potential application in soldering: owing to its capacity to reduce oxide layers, formic acid gas can be blasted at an oxide surface in order to increase solder wettability.

Formic acid is often used as a component of mobile phase in reversed-phase high-performance liquid chromatography (RP-HPLC) analysis and separation techniques for the separation of hydrophobic macromolecules, such as peptides, proteins and more complex structures including intact viruses. Especially when paired with mass spectrometry detection, formic acid offers several advantages over the more traditionally used phosphoric acid.

Chemical reactions

Formic acid is about ten times stronger than acetic acid. It is used as a volatile pH modifier in HPLC and capillary electrophoresis.
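
That factor of ten follows directly from the pKa values: about 3.75 for formic acid (see the data table above) and about 4.76 for acetic acid. A one-line check in Python:

    # Acid strength ratio from pKa: Ka = 10**(-pKa)
    pKa_formic, pKa_acetic = 3.75, 4.76   # approximate aqueous values at 25 degC
    ratio = 10 ** (pKa_acetic - pKa_formic)
    print(f"Ka(formic) / Ka(acetic) ~ {ratio:.1f}")   # ~10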

Formic acid is a source of a formyl group, for example in the formylation of methylaniline to N-methylformanilide in toluene.

In synthetic organic chemistry, formic acid is often used as a source of hydride ion. The Eschweiler-Clarke reaction and the Leuckart-Wallach reaction are examples of this application. It, or more commonly its azeotrope with triethylamine, is also used as a source of hydrogen in transfer hydrogenation.

The Eschweiler–Clarke reaction

As mentioned below, formic acid readily decomposes with concentrated sulfuric acid to form carbon monoxide.

HCO2H + H2SO4 → H2SO4 + H2O + CO

Reactions

Formic acid shares most of the chemical properties of other carboxylic acids. Because of its high acidity, solutions in alcohols form esters spontaneously. Formic acid shares some of the reducing properties of aldehydes, reducing solutions of metal oxides to their respective metal.

Decomposition

Heat and especially acids cause formic acid to decompose to carbon monoxide (CO) and water (dehydration). Treatment of formic acid with sulfuric acid is a convenient laboratory source of CO.

In the presence of platinum, it decomposes with a release of hydrogen and carbon dioxide.

HCO2H → H2 + CO2

Soluble ruthenium catalysts are also effective. Carbon monoxide-free hydrogen has been generated over a very wide pressure range (1–600 bar). Formic acid has been considered as a means of hydrogen storage. The co-product of this decomposition, carbon dioxide, can be rehydrogenated back to formic acid in a second step. Formic acid contains 53 g/L hydrogen at room temperature and atmospheric pressure, which is three and a half times as much as compressed hydrogen gas can attain at 350 bar pressure (14.7 g/L). Pure formic acid is a liquid with a flash point of +69 °C, much higher than that of gasoline (−40 °C) or ethanol (+13 °C).
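
The 53 g/L figure can be reproduced from the density and molar mass given in the data table above, since each HCOOH molecule releases one H2 on decomposition; a short check:

    # Hydrogen content of liquid formic acid, in grams of H2 per litre
    density = 1220.0          # g/L (1.220 g/mL from the properties table)
    M_formic = 46.025         # g/mol
    M_H2 = 2.016              # g/mol; one H2 is released per HCOOH on decomposition
    g_H2_per_L = density * M_H2 / M_formic
    print(f"{g_H2_per_L:.0f} g H2 per litre")   # ~53 g/L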

Addition to alkenes

Formic acid is unique among the carboxylic acids in its ability to participate in addition reactions with alkenes. Formic acid and alkenes readily react to form formate esters. In the presence of certain acids, including sulfuric and hydrofluoric acids, however, a variant of the Koch reaction occurs instead, and formic acid adds to the alkene to produce a larger carboxylic acid.

Formic acid anhydride

An unstable formic anhydride, H(C=O)−O−(C=O)H, can be obtained by dehydration of formic acid with N,N′-dicyclohexylcarbodiimide in ether at low temperature.

History

Some alchemists and naturalists were aware that ant hills give off an acidic vapor as early as the 15th century. The first person to describe the isolation of this substance (by the distillation of large numbers of ants) was the English naturalist John Ray, in 1671. Ants secrete the formic acid for attack and defense purposes. Formic acid was first synthesized from hydrocyanic acid by the French chemist Joseph Gay-Lussac. In 1855, another French chemist, Marcellin Berthelot, developed a synthesis from carbon monoxide similar to the process used today.

Formic acid was long considered a chemical compound of only minor interest in the chemical industry. In the late 1960s, however, significant quantities became available as a byproduct of acetic acid production. It now finds increasing use as a preservative and antibacterial in livestock feed.

Safety

Formic acid has low toxicity (hence its use as a food additive), with an LD50 of 1.8 g/kg (tested orally on mice). The concentrated acid is corrosive to the skin.

Formic acid is readily metabolized and eliminated by the body. Nonetheless, it has specific toxic effects; the formic acid and formaldehyde produced as metabolites of methanol are responsible for the optic nerve damage, causing blindness, seen in methanol poisoning. Some chronic effects of formic acid exposure have been documented. Some experiments on bacterial species have demonstrated it to be a mutagen. Chronic exposure in humans may cause kidney damage. Another possible effect of chronic exposure is development of a skin allergy that manifests upon re-exposure to the chemical.

Concentrated formic acid slowly decomposes to carbon monoxide and water, leading to pressure buildup in the containing vessel. For this reason, 98% formic acid is shipped in plastic bottles with self-venting caps.

The hazards of solutions of formic acid depend on the concentration. The following table lists the Globally Harmonized System of Classification and Labelling of Chemicals for formic acid solutions:

Concentration (weight percent) Pictogram H-Phrases
2–10% GHS07: Exclamation mark H315
10–90% GHS05: Corrosive H313
>90% GHS05: Corrosive H314

Formic acid in 85% concentration is not flammable, and diluted formic acid is on the U.S. Food and Drug Administration list of food additives. The principal danger from formic acid is from skin or eye contact with the concentrated liquid or vapors. The U.S. OSHA Permissible Exposure Level (PEL) of formic acid vapor in the work environment is 5 parts per million parts of air (ppm).
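
The 5 ppm and 9 mg/m3 figures quoted for the exposure limits are mutually consistent. Using the standard conversion at 25 °C and 1 atm (molar volume of roughly 24.45 L/mol):

    # Convert a vapor concentration from ppm (v/v) to mg/m3 at 25 degC and 1 atm
    ppm = 5.0
    molar_mass = 46.025      # g/mol, formic acid
    molar_volume = 24.45     # L/mol at 25 degC and 1 atm
    mg_per_m3 = ppm * molar_mass / molar_volume
    print(f"{mg_per_m3:.1f} mg/m3")   # ~9.4 mg/m3, rounded to 9 in the limit tables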

Mathematical physics

From Wikipedia, the free encyclopedia
 
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).

Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics (also known as physical mathematics).

Scope

There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods.

Classical mechanics

The rigorous, abstract and advanced reformulation of Newtonian mechanics adopting Lagrangian mechanics and Hamiltonian mechanics, even in the presence of constraints. Both formulations are embodied in analytical mechanics and lead to understanding the deep interplay of the notions of symmetry and conserved quantities during the dynamical evolution, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory and quantum field theory. Moreover, they have provided several examples and ideas in differential geometry (e.g. several notions in symplectic geometry and vector bundles).
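
As a minimal illustration of the Lagrangian formulation described above (a generic textbook example, not tied to any particular source cited here), the following SymPy sketch derives the equation of motion of a one-dimensional harmonic oscillator from its Lagrangian.

    # Equation of motion of a 1D harmonic oscillator from its Lagrangian
    # Lagr = (1/2) m xdot^2 - (1/2) k x^2, via SymPy's Euler-Lagrange helper.
    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t = sp.symbols('t')
    m, k = sp.symbols('m k', positive=True)
    x = sp.Function('x')

    Lagr = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2
    eom = euler_equations(Lagr, x(t), t)
    print(eom)   # equivalent to m x''(t) + k x(t) = 0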

Partial differential equations

The theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps the areas of mathematics most closely associated with mathematical physics. These were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics.

Quantum theory

The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty.
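
A small numerical sketch of the link between Schrödinger operators and spectral theory mentioned above: discretizing the quantum harmonic oscillator Hamiltonian (in units where ħ = m = ω = 1) and diagonalizing the resulting matrix recovers the familiar eigenvalues n + 1/2. The grid size and box width below are arbitrary illustrative choices.

    # Lowest eigenvalues of the 1D quantum harmonic oscillator by finite differences.
    # Units: hbar = m = omega = 1, so the exact spectrum is E_n = n + 1/2.
    import numpy as np

    N, box = 1000, 20.0                    # grid points and box width (from -10 to 10)
    x = np.linspace(-box / 2, box / 2, N)
    dx = x[1] - x[0]

    # H = -(1/2) d^2/dx^2 + (1/2) x^2, discretized with a 3-point stencil
    main = 1.0 / dx**2 + 0.5 * x**2
    off = -0.5 / dx**2 * np.ones(N - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    print(np.round(np.linalg.eigvalsh(H)[:5], 3))   # approx [0.5 1.5 2.5 3.5 4.5]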

Relativity and quantum relativistic theories

The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important.

Statistical mechanics

Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon the Hamiltonian mechanics (or its quantum version) and it is closely related with the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics.

Usage

Relationship between mathematics and physics

The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy"; the scope at that time being

"the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature".

Mathematical vs. theoretical physics

The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and physics theoretical aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics.

On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, and approximate arguments. Such arguments are not considered rigorous by mathematicians.

Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation).

The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory.

Prominent mathematical physicists

Before Newton

There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance.

In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence, known in Greek as aether ("pure air")—which was the pure substance beyond the sublunary sphere, and thus the pure composition of celestial entities. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion.

An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having introduced experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, founding the central concepts of what would become today's classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object.

René Descartes famously developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance brought the demise of Aristotelian physics. Descartes sought to formalize mathematical reasoning in science, and developed Cartesian coordinates for geometrically plotting locations in 3D space and marking their progressions along the flow of time.

An older contemporary of Newton, Christiaan Huygens, was the first to idealize a physical problem by a set of parameters and the first to fully mathematize a mechanistic explanation of unobservable physical phenomena, and for these reasons Huygens is considered the first theoretical physicist and one of the founders of modern mathematical physics.

Newtonian and post Newtonian

In this era, important concepts in calculus such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding maxima and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed some concepts in calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in physics. He was extremely successful in his application of calculus to the theory of motion. Newton's theory of motion, shown in his Mathematical Principles of Natural Philosophy, published in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity.

In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813), for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of analytical dynamics, called Hamiltonian dynamics, was also made by the Irish physicist, astronomer and mathematician William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms.

Into the early 19th century, mathematicians in France, Germany and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism.

A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Augustin-Jean Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. In the mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found to be a consequence of Maxwell's field. Later, radiation and then today's known electromagnetic spectrum were also found to be consequences of this electromagnetic field.

The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague mathematician Carl Gustav Jacobi (1804–1851) in particular referring to canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844-1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics.

Relativistic

By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928].

In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared.

Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object.

In 1908, Einstein's former mathematics professor Hermann Minkowski modeled 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor. Newton's concept of gravity ("two masses attract each other") is replaced by the geometrical argument that mass or energy transforms the curvature of spacetime in its vicinity, and that freely falling particles with mass move along geodesic curves in that spacetime. (Riemannian geometry already existed before the 1850s, developed by the mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) Under special relativity—a special case of general relativity—even massless energy exerts a gravitational effect by its mass equivalence, locally "curving" the geometry of the four, unified dimensions of space and time.

Quantum

Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta, in which he introduced the first non-naïve definition of quantization. The development of early quantum physics was followed by a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and of evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space. That vector space is called Hilbert space (introduced by the mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of a generalization of Euclidean space and in the study of integral equations), and it was rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, in particular the spectral theory (introduced by David Hilbert, who investigated quadratic forms with infinitely many variables; many years later it was revealed that his spectral theory is associated with the spectrum of the hydrogen atom, an application that surprised him). Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron.

List of prominent contributors to mathematical physics in the 20th century

Prominent contributors to the 20th century's mathematical physics include (ordered by birth date) William Thomson (Lord Kelvin) [1824–1907], Oliver Heaviside [1850–1925], Jules Henri Poincaré [1854–1912], David Hilbert [1862–1943], Arnold Sommerfeld [1868–1951], Constantin Carathéodory [1873–1950], Albert Einstein [1879–1955], Max Born [1882–1970], George David Birkhoff [1884–1944], Hermann Weyl [1885–1955], Satyendra Nath Bose [1894–1974], Norbert Wiener [1894–1964], John Lighton Synge [1897–1995], Wolfgang Pauli [1900–1958], Paul Dirac [1902–1984], Eugene Wigner [1902–1995], Andrey Kolmogorov [1903–1987], Lars Onsager [1903–1976], John von Neumann [1903–1957], Sin-Itiro Tomonaga [1906–1979], Hideki Yukawa [1907–1981], Nikolay Nikolayevich Bogolyubov [1909–1992], Subrahmanyan Chandrasekhar [1910–1995], Mark Kac [1914–1984], Julian Schwinger [1918–1994], Richard Phillips Feynman [1918–1988], Irving Ezra Segal [1918–1998], Ryogo Kubo [1920–1995], Arthur Strong Wightman [1922–2013], Chen-Ning Yang [1922– ], Rudolf Haag [1922–2016], Freeman John Dyson [1923–2020], Martin Gutzwiller [1925–2014], Abdus Salam [1926–1996], Jürgen Moser [1928–1999], Michael Francis Atiyah [1929–2019], Joel Louis Lebowitz [1930– ], Roger Penrose [1931– ], Elliott Hershel Lieb [1932– ], Sheldon Glashow [1932– ], Steven Weinberg [1933–2021], Ludvig Dmitrievich Faddeev [1934–2017], David Ruelle [1935– ], Yakov Grigorevich Sinai [1935– ], Vladimir Igorevich Arnold [1937–2010], Arthur Michael Jaffe [1937– ], Roman Wladimir Jackiw [1939– ], Leonard Susskind [1940– ], Rodney James Baxter [1940– ], Michael Victor Berry [1941– ], Giovanni Gallavotti [1941– ], Stephen William Hawking [1942–2018], Jerrold Eldon Marsden [1942–2010], Michael C. Reed [1942– ], Israel Michael Sigal [1945– ], Alexander Markovich Polyakov [1945– ], Barry Simon [1946– ], Herbert Spohn [1946– ], John Lawrence Cardy [1947– ], Giorgio Parisi [1948– ], Edward Witten [1951– ], Ashoke Sen [1956– ] and Juan Martín Maldacena [1968– ].

Stanford–Binet Intelligence Scales

From Wikipedia, the free encyclopedia
 
Stanford–Binet Intelligence Scales
ICD-9-CM: 94.01

The Stanford–Binet Intelligence Scales (or more commonly the Stanford–Binet) is an individually administered intelligence test that was revised from the original Binet–Simon Scale developed by Alfred Binet and Theodore Simon. The Stanford–Binet Intelligence Scale is now in its fifth edition (SB5), which was released in 2003. It is a cognitive ability and intelligence test that is used to diagnose developmental or intellectual deficiencies in young children. The test measures five weighted factors and consists of both verbal and nonverbal subtests. The five factors being tested are knowledge, quantitative reasoning, visual-spatial processing, working memory, and fluid reasoning.

The development of the Stanford–Binet initiated the modern field of intelligence testing and was one of the first examples of an adaptive test. The test originated in France, then was revised in the United States. It was initially created by the French psychologist Alfred Binet, who, following the introduction of a law mandating universal education by the French government, began developing a method of identifying "slow" children, so that they could be placed in special education programs, instead of being labelled sick and sent to the asylum. As Binet indicated, case studies might be more detailed and helpful, but the time required to test many people would be excessive. In 1916, at Stanford University, the psychologist Lewis Terman released a revised examination that became known as the Stanford–Binet test.

Development

As discussed by Fancher & Rutherford in 2012, the Stanford–Binet is a modified version of the Binet-Simon Intelligence scale. The Binet-Simon scale was created by the French psychologist Alfred Binet and his student Theodore Simon. Due to changing education laws of the time, Binet had been requested by a government commission to come up with a way to detect children who were falling behind developmentally and in need of help. Binet believed that intelligence is malleable and that intelligence tests would help target children in need of extra attention to advance their intelligence.

To create their test, Binet and Simon first created a baseline of intelligence. A wide range of children were tested on a broad spectrum of measures in an effort to discover a clear indicator of intelligence. Failing to find a single identifier of intelligence, Binet and Simon instead compared children in each category by age. The children's highest levels of achievement were sorted by age and common levels of achievement considered the normal level for that age. Because this testing method merely compares a person's ability to the common ability level of others their age, the general practices of the test can easily be transferred to test different populations, even if the measures used are changed.

Reproduction of an item from the 1908 Binet–Simon intelligence scale, that shows three pairs of pictures, and asks the tested child, "Which of these two faces is the prettier?" Reproduced from the article "A Practical Guide for Administering the Binet-Simon Scale for Measuring Intelligence" by J. W. Wallace Wallin in the December 1911 issue of the journal The Psychological Clinic (volume 5 number 7), public domain

One of the first intelligence tests, the Binet-Simon test quickly gained support in the psychological community, many of whom further spread it to the public. Lewis M. Terman, a psychologist at Stanford University, was one of the first to create a version of the test for people in the United States, naming the localized version the Stanford–Binet Intelligence Scale. Terman used the test not only to help identify children with learning difficulties but also to find children and adults who had above average levels of intelligence. In creating his version, Terman also tested additional methods for his Stanford revision, publishing his first official version as The Measurement of Intelligence: An Explanation of and a Complete Guide for the Use of the Stanford Revision and Extension of the Binet-Simon Intelligence Scale (Fancher & Rutherford, 2012) (Becker, 2003).

The original tests in the 1905 form include:

  1. "Le Regard"
  2. Prehension Provoked by a Tactile Stimulus
  3. Prehension Provoked by a Visual Perception
  4. Recognition of Food
  5. Quest of Food Complicated by a Slight Mechanical Difficulty
  6. Execution of Simple Commands and Imitation of Simple Gestures
  7. Verbal Knowledge of Objects
  8. Verbal Knowledge of Pictures
  9. Naming of Designated Objects
  10. Immediate Comparison of Two Lines of Unequal Lengths
  11. Repetition of Three Figures
  12. Comparison of Two Weights
  13. Suggestibility
  14. Verbal Definition of Known Objects
  15. Repetition of Sentences of Fifteen Words
  16. Comparison of Known Objects from Memory
  17. Exercise of Memory on Pictures
  18. Drawing a Design from Memory
  19. Immediate Repetition of Figures
  20. Resemblances of Several Known Objects Given from Memory
  21. Comparison of Lengths
  22. Five Weights to be Placed in Order
  23. Gap in Weights
  24. Exercise upon Rhymes
  25. Verbal Gaps to be Filled
  26. Synthesis of Three Words in One Sentence
  27. Reply to an Abstract Question
  28. Reversal of the Hands of a Clock
  29. Paper Cutting
  30. Definitions of Abstract Terms

Historical use

One hindrance to widespread understanding of the test is its use of a variety of different measures. In an effort to simplify the information gained from the Binet-Simon test into a more comprehensible and easier to understand form, German psychologist William Stern created the now well known Intelligence Quotient (IQ). By comparing the mental age a child scored at to their biological age, a ratio is created to show the rate of their mental progress as IQ. Terman quickly grasped the idea for his Stanford revision with the adjustment of multiplying the ratios by 100 to make them easier to read.
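
In Stern's ratio formulation, as adopted and rescaled by Terman, the computation is simply mental age divided by chronological age, multiplied by 100. A toy example with hypothetical ages (not real norms):

    # Stern's ratio IQ as used in early Stanford-Binet scoring:
    # IQ = (mental age / chronological age) * 100
    def ratio_iq(mental_age_years, chronological_age_years):
        return 100.0 * mental_age_years / chronological_age_years

    # Hypothetical example: an 8-year-old performing at a 10-year-old level
    print(ratio_iq(10, 8))   # 125.0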

As also discussed by Leslie, in 2000, Terman was another of the main forces in spreading intelligence testing in the United States (Becker, 2003). Terman quickly promoted the use of the Stanford–Binet for schools across the United States where it saw a high rate of acceptance. Terman's work also had the attention of the U.S. government, who recruited him to apply the ideas from his Stanford–Binet test for military recruitment near the start of World War I. With over 1.7 million military recruits taking a version of the test and the acceptance of the test by the government, the Stanford–Binet saw an increase in awareness and acceptance (Fancher & Rutherford, 2012).

Given the perceived importance of intelligence and with new ways to measure intelligence, many influential individuals, including Terman, began promoting controversial ideas to increase the nation's overall intelligence. These ideas included things such as discouraging individuals with low IQ from having children and granting important positions based on high IQ scores. While there was significant opposition, many institutions proceeded to adjust students' education based on their IQ scores, often with a heavy influence on future career possibilities (Leslie, 2000).

Revisions of the Stanford–Binet Intelligence Scale

Maud Merrill

Since the first publication in 1916, there have been four additional revised editions of the Stanford–Binet Intelligence Scales, the first of which was developed by Lewis Terman. Over twenty years later, Maud Merrill was accepted into Stanford's education program shortly before Terman became the head of the psychology department. She completed both her master's degree and Ph.D. under Terman and quickly became a colleague of his as they started the revisions of the second edition together. There were 3,200 examinees, aged one and a half to eighteen years, drawn from different geographic regions as well as socioeconomic levels in an attempt to assemble a broader normative sample (Roid & Barram, 2004). This edition incorporated more objectified scoring methods, while placing less emphasis on recall memory and including a greater range of nonverbal abilities (Roid & Barram, 2004) compared to the 1916 edition.

When Terman died in 1956, the revisions for the third edition were well underway, and Merrill was able to publish the final revision in 1960 (Roid & Barram, 2004). The deviation IQ made its first appearance in the third edition, although the mental age scale and ratio IQ were not eliminated. Terman and Merrill attempted to calculate IQs with a uniform standard deviation while still maintaining the use of the mental age scale: the manual included a formula to convert the ratio IQs, whose means and standard deviations varied between age ranges, to IQs with a mean of 100 and a uniform standard deviation of 16. However, it was later demonstrated that very high scores occurred with much greater frequency than the normal curve with a standard deviation of 16 would predict, and scores in the gifted range were much higher than those yielded by essentially every other major test. It was therefore concluded that the ratio IQs modified to have a uniform mean and standard deviation, referred to as "deviation IQs" in the manual of the third edition of the Stanford–Binet (Terman & Merrill, 1960), could not be directly compared to scores on "true" deviation IQ tests, such as the Wechsler Intelligence Scales and the later versions of the Stanford–Binet, because those tests compare the performance of examinees to their own age group on a normal distribution (Ruf, 2003). While new features were added, there were no newly created items included in this revision. Instead, any items from the 1937 form that showed no substantial change in difficulty from the 1930s to the 1950s were either eliminated or adjusted (Roid & Barram, 2004).
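
The conversion described above amounts to re-expressing each score as a standard score within its own age group and mapping it onto a scale with mean 100 and standard deviation 16. The sketch below uses invented age-group statistics purely for illustration; it is not the published conversion table.

    # Deviation IQ: standardize a score within its age group, then rescale to
    # mean 100, standard deviation 16 (the Stanford-Binet third-edition convention).
    def deviation_iq(score, age_group_mean, age_group_sd, target_mean=100.0, target_sd=16.0):
        z = (score - age_group_mean) / age_group_sd
        return target_mean + target_sd * z

    # Illustrative only: a ratio IQ of 120 in an age group with mean 102 and SD 18
    print(round(deviation_iq(120, 102, 18)))   # 116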

Robert Thorndike was asked to take over after Merrill's retirement. With the help of Elizabeth Hagen and Jerome Sattler, Thorndike produced the fourth edition of the Stanford–Binet Intelligence Scale in 1986. This edition covers the ages two through twenty-three and has some considerable changes compared to its predecessors (Graham & Naglieri, 2003). This edition was the first to use the fifteen subtests with point scales in place of using the previous age scale format. In an attempt to broaden cognitive ability, the subtests were grouped and resulted in four area scores, which improved flexibility for administration and interpretation (Youngstrom, Glutting, & Watkins, 2003). The fourth edition is known for assessing children that may be referred for gifted programs. This edition includes a broad range of abilities, which provides more challenging items for those in their early adolescent years, whereas other intelligence tests of the time did not provide difficult enough items for the older children (Laurent, Swerdlik, & Ryburn, 1992).

Gale Roid published the most recent edition of the Stanford–Binet Intelligence Scale. Roid attended Harvard University where he was a research assistant to David McClelland. McClelland is well known for his studies on the need for achievement. While the fifth edition incorporates some of the classical traditions of these scales, there were several significant changes made.

Timeline

  • April 1905: Development of Binet-Simon Test announced at a conference in Rome
  • June 1905: Binet-Simon Intelligence Test introduced
  • 1908 and 1911: New Versions of Binet-Simon Intelligence Test
  • 1916: Stanford–Binet First Edition by Terman
  • 1937: Second Edition by Terman and Merrill
  • 1960: Third Edition (Form L-M) by Merrill
  • 1973: Third Edition (Form L-M) re-normed
  • 1986: Fourth Edition by Thorndike, Hagen, and Sattler
  • 2003: Fifth Edition by Roid

Stanford–Binet Intelligence Scale: Fifth Edition

Just as when Binet first developed the IQ test, the Stanford–Binet Intelligence Scale: Fifth Edition (SB5) is rooted in the schooling process as a means of assessing intelligence. It continuously and efficiently assesses all levels of ability in individuals across a broader age range. It is also capable of measuring multiple dimensions of abilities (Ruf, 2003).

The SB5 can be administered to individuals as early as two years of age. There are ten subtests included in this revision, covering both verbal and nonverbal domains. Five factors are also incorporated in this scale, which are directly related to the Cattell-Horn-Carroll (CHC) hierarchical model of cognitive abilities. These factors include fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory (Bain & Allin, 2005). Many of the familiar picture absurdities, vocabulary, memory for sentences, and verbal absurdities items still remain from the previous editions (Janzen, Obrzut, & Marusiak, 2003), although with more modern artwork and item content in the revised fifth edition.

For every verbal subtest that is used, there is a nonverbal counterpart across all factors. These nonverbal tasks consist of making movement responses such as pointing or assembling manipulatives (Bain & Allin, 2005). These counterparts have been included to address language-reduced assessments in multicultural societies. Depending on age and ability, administration can range from fifteen minutes to an hour and fifteen minutes.

The fifth edition incorporated a new scoring system, which can provide a wide range of information such as four intelligence score composites, five factor indices, and ten subtest scores. Additional scoring information includes percentile ranks, age equivalents, and a change-sensitive score (Janzen, Obrzut, & Marusiak, 2003). Extended IQ scores and gifted composite scores are available with the SB5 in order to optimize the assessment for gifted programs (Ruf, 2003). To reduce errors and increase diagnostic precision, scores are now obtained electronically by computer.

The standardization sample for the SB5 included 4,800 participants varying in age, sex, race/ethnicity, geographic region, and socioeconomic level (Bain & Allin, 2005).

Reliability

Several reliability tests have been performed on the SB5, including split-half reliability, standard error of measurement, plotting of test information curves, test-retest stability, and inter-scorer agreement. On average, IQ scores for this scale have been found to be quite stable across time (Janzen, Obrzut, & Marusiak, 2003). Internal consistency was tested by split-half reliability and was reported to be substantial and comparable to other cognitive batteries (Bain & Allin, 2005). The median interscorer correlation was .90 (Janzen, Obrzut, & Marusiak, 2003). The SB5 has also been found to have great precision at advanced levels of performance, meaning that the test is especially useful in testing children for giftedness (Bain & Allin, 2005). Retesting showed only small practice effects and effects of familiarity with testing procedures, and these proved to be insignificant. Because of the small mean differences on retest, the SB5 can be readministered after a six-month interval rather than one year (Bain & Allin, 2005).
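
Split-half reliability, one of the checks listed above, is conventionally computed by correlating scores on two halves of a test and then applying the Spearman–Brown correction to estimate full-length reliability. The sketch below uses simulated data purely to illustrate the procedure; it does not reproduce SB5 results.

    # Generic split-half reliability with Spearman-Brown correction.
    # The data here are invented for illustration; they are not SB5 results.
    import numpy as np

    rng = np.random.default_rng(0)
    true_ability = rng.normal(size=200)
    half_a = true_ability + 0.5 * rng.normal(size=200)   # score on one half of the items
    half_b = true_ability + 0.5 * rng.normal(size=200)   # score on the other half

    r_half = np.corrcoef(half_a, half_b)[0, 1]           # correlation of the two halves
    r_full = 2 * r_half / (1 + r_half)                   # Spearman-Brown full-length estimate
    print(round(r_half, 3), round(r_full, 3))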

Validity

Content validity has been found based on the professional judgments Roid received concerning fairness of items and item content as well as items concerning the assessment of giftedness (Bain & Allin, 2005). With an examination of age trends, construct validity was supported along with empirical justification of a more substantial g loading for the SB5 compared to previous editions. The potential for a variety of comparisons, especially for within or across factors and verbal/nonverbal domains, has been appreciated with the scores received from the SB5 (Bain & Allin, 2005).

Score classification

The test publisher includes suggested score classifications in the test manual.

Stanford–Binet Fifth Edition (SB5) classification
IQ Range ("deviation IQ") IQ Classification
145–160 Very gifted or highly advanced
130–144 Gifted or very advanced
120–129 Superior
110–119 High average
90–109 Average
80–89 Low average
70–79 Borderline impaired or delayed
55–69 Mildly impaired or delayed
40–54 Moderately impaired or delayed

The classifications of scores used in the Fifth Edition differ from those used in earlier versions of the test.

Subtests and factors

Fluid reasoning: Early reasoning, Verbal absurdities, Verbal analogies; Object series matrices (non-verbal)
Knowledge: Vocabulary; Procedural knowledge (non-verbal), Picture absurdities (non-verbal)
Quantitative reasoning: Verbal quantitative reasoning; Non-verbal quantitative reasoning (non-verbal)
Visual-spatial processing: Position and direction; Form board and form patterns (non-verbal)
Working memory: Memory for sentences, Last word; Delayed response (non-verbal), Block span (non-verbal)

Present use

Since its inception, the Stanford–Binet has been revised several times. Currently, the test is in its fifth edition, which is called the Stanford–Binet Intelligence Scales, Fifth Edition, or SB5. According to the publisher's website, "The SB5 was normed on a stratified random sample of 4,800 individuals that matches the 2000 U.S. Census". By administering the Stanford–Binet test to large numbers of individuals selected at random from different parts of the United States, it has been found that the scores approximate a normal distribution. Over time, the revised editions of the Stanford–Binet have introduced substantial changes in the way the tests are presented. The test has improved with the introduction of a more parallel form and more demonstrative standards. For one, a non-verbal IQ component is included in the present-day tests, whereas in the past there was only a verbal component. In fact, the test now has an equal balance of verbal and non-verbal content. It is also more animated than the other tests, providing the test-takers with more colourful artwork, toys and manipulatives. This allows the test to cover a wider range of test-taker ages. This test is purportedly useful in assessing the intellectual capabilities of people ranging from young children all the way to young adults. However, the test has come under criticism for not being able to compare people of different age categories, since each category gets a different set of tests. Furthermore, very young children tend to do poorly on the test because they lack the ability to concentrate long enough to finish it.
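
Because the scores are scaled to approximate a normal distribution, any score can be read as a percentile rank. The sketch below assumes the usual deviation-IQ convention of a mean of 100 and a standard deviation of about 15; the exact values depend on the edition and its norms.

    # Percentile rank of an IQ score under a normal model (assumed mean 100, SD 15).
    from statistics import NormalDist

    iq_dist = NormalDist(mu=100, sigma=15)
    for score in (85, 100, 115, 130, 145):
        print(score, f"{100 * iq_dist.cdf(score):.1f}th percentile")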

Current uses for the test include clinical and neuropsychological assessment, educational placement, compensation evaluations, career assessment, adult neuropsychological treatment, forensics, and research on aptitude. Various high-IQ societies also accept this test for admission into their ranks; for example, the Triple Nine Society accepts a minimum qualifying score of 151 for Form L or M, 149 for Form L-M if taken in 1986 or earlier, 149 for SB-IV, and 146 for SB-V; in all cases the applicant must have been at least 16 years old at the date of the test. Intertel accepts a score of 135 on SB5 and 137 on Form L-M.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...