
Thursday, July 26, 2018

Second Industrial Revolution

From Wikipedia, the free encyclopedia

The Second Industrial Revolution, also known as the Technological Revolution, was a phase of rapid industrialization in the final third of the 19th century and the beginning of the 20th. The First Industrial Revolution, which ended in the early to mid-1800s, was followed by a slowdown in macroinventions before the Second Industrial Revolution began around 1870. Though a number of its characteristic events can be traced to earlier innovations in manufacturing, such as the establishment of a machine tool industry, the development of methods for manufacturing interchangeable parts and the invention of the Bessemer process to produce steel, the Second Industrial Revolution is generally dated between 1870 and 1914 (the start of World War I).

Advancements in manufacturing and production technology enabled the widespread adoption of preexisting technological systems such as telegraph and railroad networks, gas and water supply, and sewage systems, which had earlier been concentrated in a few select cities. The enormous expansion of rail and telegraph lines after 1870 allowed unprecedented movement of people and ideas, which culminated in a new wave of globalization. In the same time period, new technological systems were introduced, most significantly electrical power and telephones. The Second Industrial Revolution continued into the 20th century with early factory electrification and the production line, and ended at the start of World War I.

Overview

The Second Industrial Revolution was a period of rapid industrial development, primarily in Britain, Germany and the United States, but also in France, the Low Countries, Italy and Japan. It followed on from the First Industrial Revolution that began in Britain in the late 18th century and then spread throughout Western Europe and North America. It was characterized by the build-out of railroads, large-scale iron and steel production, widespread use of machinery in manufacturing, greatly increased use of steam power, widespread use of the telegraph, use of petroleum and the beginning of electrification. It also was the period during which modern organizational methods for operating large-scale businesses over vast areas came into use.

The concept was introduced by Patrick Geddes, Cities in Evolution (1910), but David Landes' use of the term in a 1966 essay and in The Unbound Prometheus (1972) standardized scholarly definitions of the term, which was most intensely promoted by Alfred Chandler (1918–2007). However, some continue to express reservations about its use.[3]

Landes (2003) stresses the importance of new technologies, especially the internal combustion engine and petroleum, new materials and substances including alloys and chemicals, electricity, and communication technologies (such as the telegraph, telephone and radio).

Vaclav Smil called the period 1867–1914 "The Age of Synergy" during which most of the great innovations were developed. Unlike the First Industrial Revolution, the inventions and innovations were engineering and science-based.[4]

Industry and technology

A synergy between iron and steel, railroads and coal developed at the beginning of the Second Industrial Revolution. Railroads allowed cheap transportation of materials and products, which in turn lowered the cost of rails for building more railroads. Railroads also benefited from cheap coal for their steam locomotives. This synergy led to the laying of 75,000 miles of track in the U.S. in the 1880s, the largest amount anywhere in world history.[5]

Iron

The hot blast technique, in which the hot flue gas from a blast furnace is used to preheat combustion air blown into a blast furnace, was invented and patented by James Beaumont Neilson in 1828 at Wilsontown Ironworks in Scotland. Hot blast was the single most important advance in fuel efficiency of the blast furnace as it greatly reduced the fuel consumption for making pig iron, and was one of the most important technologies developed during the Industrial Revolution.[6] Falling costs for producing wrought iron coincided with the emergence of the railway in the 1830s.

The early technique of hot blast used iron for the regenerative heating medium. Iron caused problems with expansion and contraction, which stressed the iron and caused failure. Edward Alfred Cowper developed the Cowper stove in 1857.[7] This stove used firebrick as a storage medium, solving the expansion and cracking problem. The Cowper stove was also capable of producing high heat, which resulted in very high throughput of blast furnaces. The Cowper stove is still used in today's blast furnaces.

With the greatly reduced cost of producing pig iron with coke using hot blast, demand grew dramatically and so did the size of blast furnaces.[8][9]

Steel

A diagram of the Bessemer converter. Air blown through holes in the converter bottom creates a violent reaction in the molten pig iron that oxidizes the excess carbon, converting the pig iron to pure iron or steel, depending on the residual carbon.

The Bessemer process, invented by Sir Henry Bessemer, allowed the mass-production of steel, increasing the scale and speed of production of this vital material, and decreasing the labor requirements. The key principle was the removal of excess carbon and other impurities from pig iron by oxidation with air blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten.
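
As a brief chemical sketch (added here for clarity; the article itself gives no equations), the main refining reactions in the converter are exothermic oxidations of the elements dissolved in the pig iron, which is why the blow keeps the charge molten without any external fuel:

\[
\mathrm{Si} + \mathrm{O_2} \rightarrow \mathrm{SiO_2}, \qquad
2\,\mathrm{C} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO}, \qquad
2\,\mathrm{Mn} + \mathrm{O_2} \rightarrow 2\,\mathrm{MnO}.
\]

The carbon leaves the melt as carbon monoxide gas, while the silicon and manganese oxides join the slag.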

The "acid" Bessemer process had a serious limitation in that it required relatively scarce hematite ore[10] which is low in phosphorus. Sidney Gilchrist Thomas developed a more sophisticated process to eliminate the phosphorus from iron. Collaborating with his cousin, Percy Gilchrist a chemist at the Blaenavon Ironworks, Wales, he patented his process in 1878;[11] Bolckow Vaughan & Co. in Yorkshire was the first company to use his patented process.[12] His process was especially valuable on the continent of Europe, where the proportion of phosphoric iron was much greater than in England, and both in Belgium and in Germany the name of the inventor became more widely known than in his own country. In America, although non-phosphoric iron largely predominated, an immense interest was taken in the invention.[12]
 
The Barrow Hematite Steel Company operated 18 Bessemer converters and owned the largest steelworks in the world at the turn of the 20th century.

The next great advance in steel making was the Siemens-Martin process. Sir Charles William Siemens developed his regenerative furnace in the 1850s, for which he claimed in 1857 to be able to recover enough heat to save 70–80% of the fuel. The furnace operated at a high temperature by using regenerative preheating of fuel and air for combustion. Through this method, an open-hearth furnace can reach temperatures high enough to melt steel, but Siemens did not initially use it in that manner.

French engineer Pierre-Émile Martin was the first to take out a license for the Siemens furnace and apply it to the production of steel in 1865. The Siemens-Martin process complemented rather than replaced the Bessemer process. Its main advantages were that it did not expose the steel to excessive nitrogen (which would cause the steel to become brittle), that it was easier to control, and that it permitted the melting and refining of large amounts of scrap steel, lowering steel production costs and recycling an otherwise troublesome waste material. It became the leading steel making process by the early 20th century.

The availability of cheap steel allowed building larger bridges, railroads, skyscrapers, and ships.[13] Other important steel products—also made using the open hearth process—were steel cable, steel rod and sheet steel which enabled large, high-pressure boilers and high-tensile strength steel for machinery which enabled much more powerful engines, gears and axles than were previously possible. With large amounts of steel it became possible to build much more powerful guns and carriages, tanks, armored fighting vehicles and naval ships.

Rail

A rail rolling mill in Donetsk, 1887.

The increase in steel production from the 1860s meant that railroads could finally be made from steel at a competitive cost. Being a much more durable material, steel steadily replaced iron as the standard for railway rail, and due to its greater strength, longer lengths of rails could now be rolled. Wrought iron was soft and contained flaws caused by included dross. Iron rails also could not support heavy locomotives and were damaged by hammer blow. The first to make durable rails of steel rather than wrought iron was Robert Forester Mushet at the Darkhill Ironworks, Gloucestershire in 1857.

The first of his steel rails was sent to Derby Midland railway station. They were laid at part of the station approach where the iron rails had to be renewed at least every six months, and occasionally every three. Six years later, in 1863, the rail seemed as perfect as ever, although some 700 trains had passed over it daily.[14] This provided the basis for the accelerated construction of rail transportation throughout the world in the late nineteenth century. Steel rails lasted over ten times longer than did iron,[15] and with the falling cost of steel, heavier weight rails were used. This allowed the use of more powerful locomotives, which could pull longer trains, and longer rail cars, all of which greatly increased the productivity of railroads.[16] Rail became the dominant form of transport infrastructure throughout the industrialized world,[17] producing a steady decrease in the cost of shipping seen for the rest of the century.[18]

Electrification

The theoretical and practical basis for the harnessing of electric power was laid by the scientist and experimentalist Michael Faraday. Through his research on the magnetic field around a conductor carrying a direct current, Faraday established the basis for the concept of the electromagnetic field in physics.[19][20] His inventions of electromagnetic rotary devices were the foundation of the practical use of electricity in technology.
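
In modern notation (not part of the original article), the principle Faraday uncovered, and on which every generator and transformer since has relied, is his law of induction: a changing magnetic flux through a circuit induces an electromotive force,

\[
\mathcal{E} = -\frac{d\Phi_B}{dt}.
\]

Rotating a coil in a magnetic field makes the flux \(\Phi_B\) vary with time and so produces a voltage; this is the link between his rotary devices and practical electric power.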

U.S. Patent#223898: Electric-Lamp. Issued January 27, 1880.

In 1881, Sir Joseph Swan, inventor of the first feasible incandescent light bulb, supplied about 1,200 Swan incandescent lamps to the Savoy Theatre in the City of Westminster, London, which was the first theatre, and the first public building in the world, to be lit entirely by electricity.[21][22] Swan's lightbulb had already been used in 1879 to light Mosley Street, in Newcastle upon Tyne, the first electrical street lighting installation in the world.[23][24] This set the stage for the electrification of industry and the home. The first large scale central distribution supply plant was opened at Holborn Viaduct in London in 1882[25] and later at Pearl Street Station in New York City.[26]
 
Three-phase rotating magnetic field of an AC motor. The three poles are each connected to a separate wire. Each wire carries current 120 degrees apart in phase. Arrows show the resulting magnetic force vectors. Three phase current is used in commerce and industry.
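
A short mathematical aside, added here to unpack the caption: writing the three phase currents as

\[
i_a = I\cos(\omega t), \qquad i_b = I\cos(\omega t - 120^\circ), \qquad i_c = I\cos(\omega t + 120^\circ),
\]

the pulsating fields of the three spatially displaced windings sum to a field of constant magnitude (3/2 of a single winding's peak) that rotates at the supply frequency \(\omega\). It is this rotating field that drags the rotor of an induction motor around.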

The first modern power station in the world was built by the English electrical engineer Sebastian de Ferranti at Deptford. Built on an unprecedented scale and pioneering the use of high voltage (10,000V) alternating current, it generated 800 kilowatts and supplied central London. On its completion in 1891 it supplied high-voltage AC power that was then "stepped down" with transformers for consumer use on each street. Electrification allowed the final major developments in manufacturing methods of the Second Industrial Revolution, namely the assembly line and mass production.[27]
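
The "stepping down" mentioned above follows the ideal transformer relation (the consumer voltage below is an illustrative assumption, not a figure reported for the Deptford scheme):

\[
\frac{V_{\text{secondary}}}{V_{\text{primary}}} = \frac{N_{\text{secondary}}}{N_{\text{primary}}},
\]

so bringing 10,000 V down to, say, 100 V for local use requires a turns ratio of roughly 100:1.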

Electrification was called "the most important engineering achievement of the 20th century" by the National Academy of Engineering.[28] Electric lighting in factories greatly improved working conditions, eliminating the heat and pollution caused by gas lighting, and reducing the fire hazard to the extent that the cost of electricity for lighting was often offset by the reduction in fire insurance premiums. Frank J. Sprague developed the first successful DC motor in 1886. By 1889, 110 electric street railways were either using his equipment or in planning. The electric street railway became a major infrastructure before 1920. The AC induction motor was developed in the 1890s and soon began to be used in the electrification of industry.[29] Household electrification did not become common until the 1920s, and then only in cities. Fluorescent lighting was commercially introduced at the 1939 World's Fair.

Electrification also allowed the inexpensive production of electro-chemicals, such as aluminium, chlorine, sodium hydroxide, and magnesium.[30]

Machine tools

A graphic representation of formulas for the pitches of threads of screw bolts.

The use of machine tools began with the onset of the First Industrial Revolution. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron—and hand working lacked precision and was a slow and expensive process. One of the first machine tools was John Wilkinson's boring machine, which bored a precise cylinder for James Watt's first steam engine in 1774. Advances in the accuracy of machine tools can be traced to Henry Maudslay and were refined by Joseph Whitworth. Standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity.

In 1841, Joseph Whitworth created a design that, through its adoption by many British railroad companies, became the world's first national machine tool standard called British Standard Whitworth.[31] During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards.

The importance of machine tools to mass production is shown by the fact that production of the Ford Model T used 32,000 machine tools, most of which were powered by electricity.[32] Henry Ford is quoted as saying that mass production would not have been possible without electricity because it allowed placement of machine tools and other equipment in the order of the work flow.[33]

Paper making

The first paper making machine was the Fourdrinier machine, built by Sealy and Henry Fourdrinier, stationers in London. In 1800, Matthias Koops, working in London, investigated the idea of using wood to make paper, and began his printing business a year later. However, his enterprise was unsuccessful due to the prohibitive cost at the time.[34][35][36]
It was in the 1840s that Charles Fenerty in Nova Scotia and Friedrich Gottlob Keller in Saxony each invented a successful machine which extracted the fibres from wood (as had been done with rags) and made paper from them. This started a new era for paper making,[37] and, together with the invention of the fountain pen and the mass-produced pencil of the same period, and in conjunction with the advent of the steam driven rotary printing press, wood-based paper caused a major transformation of the 19th century economy and society in industrialized countries. With the introduction of cheaper paper, schoolbooks, fiction, non-fiction, and newspapers became gradually available by 1900. Cheap wood-based paper also allowed keeping personal diaries or writing letters and so, by 1850, the clerk, or writer, ceased to be a high-status job. By the 1880s chemical processes for paper manufacture were in use, becoming dominant by 1900.

Petroleum

The petroleum industry, both production and refining, began in 1848 with the first oil works in Scotland. The chemist James Young set up a small business refining the crude oil in 1848. Young found that by slow distillation he could obtain a number of useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax.[38] In 1850 Young built the first truly commercial oil-works and oil refinery in the world at Bathgate, using oil extracted from locally mined torbanite, shale, and bituminous coal to manufacture naphtha and lubricating oils; paraffin for fuel use and solid paraffin were not sold till 1856.
Cable tool drilling was developed in ancient China and was used for drilling brine wells. The salt domes also held natural gas, which some wells produced and which was used for evaporation of the brine. Chinese well drilling technology was introduced to Europe in 1828.[39]

Although there were many efforts in the mid-19th century to drill for oil, Edwin Drake's 1859 well near Titusville, Pennsylvania, is considered the first "modern oil well".[40] Drake's well touched off a major boom in oil production in the United States.[41] Drake learned of cable tool drilling from Chinese laborers in the U.S.[42] The first primary product was kerosene for lamps and heaters.[30][43] Similar developments around Baku fed the European market.

Kerosene lighting was much more efficient and less expensive than vegetable oils, tallow and whale oil. Although town gas lighting was available in some cities, kerosene produced a brighter light until the invention of the gas mantle. Both were replaced by electricity for street lighting following the 1890s and for households during the 1920s. Gasoline was an unwanted byproduct of oil refining until automobiles were mass-produced after 1914, and gasoline shortages appeared during World War I. The invention of the Burton process for thermal cracking doubled the yield of gasoline, which helped alleviate the shortages.[43]

Chemical

The BASF chemical factories in Ludwigshafen, Germany, 1881

Synthetic dye was discovered by English chemist William Henry Perkin in 1856. At the time, chemistry was still in a quite primitive state; it was a difficult proposition to determine the arrangement of the elements in compounds, and the chemical industry was still in its infancy. Perkin's accidental discovery was that aniline could be partly transformed into a crude mixture which, when extracted with alcohol, produced a substance with an intense purple colour. He scaled up production of the new "mauveine" and commercialized it as the world's first synthetic dye.[44]

After the discovery of mauveine, many new aniline dyes appeared (some discovered by Perkin himself), and factories producing them were constructed across Europe. Towards the end of the century, Perkin and other British companies found their research and development efforts increasingly eclipsed by the German chemical industry which became world dominant by 1914.

Maritime technology

A crowd of people watch a large black and red ship with one funnel and six masts adorned with flags
The launch of Great Britain, which was advanced for her time, 1843.

This era saw the birth of the modern ship as disparate technological advances came together.
The screw propeller was introduced in 1835 by Francis Pettit Smith, who discovered a new way of building propellers by accident. Up to that time, propellers were literally screws, of considerable length. But during the testing of a boat propelled by one, the screw snapped off, leaving a fragment shaped much like a modern boat propeller. The boat moved faster with the broken propeller.[45] The superiority of the screw over paddles was taken up by navies. Trials with Smith's SS Archimedes, the first screw-propelled steamship, led to the famous tug-of-war competition in 1845 between the screw-driven HMS Rattler and the paddle steamer HMS Alecto; the former pulling the latter backward at 2.5 knots (4.6 km/h).

The first seagoing iron steamboat was built by Horseley Ironworks and named the Aaron Manby. It also used an innovative oscillating engine for power. The boat was built at Tipton using temporary bolts, disassembled for transportation to London, and reassembled on the Thames in 1822, this time using permanent rivets.

Other technological developments followed, including the invention of the surface condenser, which allowed boilers to run on purified water rather than salt water, eliminating the need to stop to clean them on long sea journeys. The Great Western,[46][47][48] built by engineer Isambard Kingdom Brunel, was the longest ship in the world at 236 ft (72 m) with a 250-foot (76 m) keel and was the first to prove that transatlantic steamship services were viable. The ship was constructed mainly from wood, but Brunel added bolts and iron diagonal reinforcements to maintain the keel's strength. In addition to its steam-powered paddle wheels, the ship carried four masts for sails.

Brunel followed this up with the Great Britain, launched in 1843 and considered the first modern ship built of metal rather than wood, powered by an engine rather than wind or oars, and driven by propeller rather than paddle wheel.[49] Brunel's vision and engineering innovations made the building of large-scale, propeller-driven, all-metal steamships a practical reality, but the prevailing economic and industrial conditions meant that it would be several decades before transoceanic steamship travel emerged as a viable industry.

Highly efficient multiple expansion steam engines began being used on ships, allowing them to carry less coal than freight.[50] The oscillating engine was first built by Aaron Manby and Joseph Maudslay in the 1820s as a type of direct-acting engine that was designed to achieve further reductions in engine size and weight. Oscillating engines had the piston rods connected directly to the crankshaft, dispensing with the need for connecting rods. In order to achieve this aim, the engine cylinders were not immobile as in most engines, but secured in the middle by trunnions which allowed the cylinders themselves to pivot back and forth as the crankshaft rotated, hence the term oscillating.

It was John Penn, engineer for the Royal Navy, who perfected the oscillating engine. One of his earliest engines was the grasshopper beam engine. In 1844 he replaced the engines of the Admiralty yacht, HMS Black Eagle with oscillating engines of double the power, without increasing either the weight or space occupied, an achievement which broke the naval supply dominance of Boulton & Watt and Maudslay, Son & Field. Penn also introduced the trunk engine for driving screw propellers in vessels of war. HMS Encounter (1846) and HMS Arrogant (1848) were the first ships to be fitted with such engines and such was their efficacy that by the time of Penn's death in 1878, the engines had been fitted in 230 ships and were the first mass-produced, high-pressure and high-revolution marine engines.[51]

The revolution in naval design led to the first modern battleships in the 1870s, evolved from the ironclad design of the 1860s. The Devastation-class turret ships were built for the British Royal Navy as the first class of ocean-going capital ship that did not carry sails, and the first whose entire main armament was mounted on top of the hull rather than inside it.

Rubber

The vulcanization of rubber by American Charles Goodyear and Englishman Thomas Hancock in the 1840s paved the way for a growing rubber industry, especially the manufacture of rubber tyres.[52]
John Boyd Dunlop developed the first practical pneumatic tyre in 1887 in South Belfast. Willie Hume demonstrated the supremacy of Dunlop's newly invented pneumatic tyres in 1889, winning the tyre's first ever races in Ireland and then England.[53] [54] Dunlop's development of the pneumatic tyre arrived at a crucial time in the development of road transport and commercial production began in late 1890.

Bicycles

The modern bicycle was designed by the English engineer Harry John Lawson in 1876, although it was John Kemp Starley who produced the first commercially successful safety bicycle a few years later.[55] Its popularity soon grew, causing the bike boom of the 1890s.

Road networks improved greatly in the period, using the Macadam method pioneered by Scottish engineer John Loudon McAdam, and hard surfaced roads were built around the time of the bicycle craze of the 1890s. Modern tarmac was patented by British civil engineer Edgar Purnell Hooley in 1901.[56]

Automobile

German inventor Karl Benz patented the world's first automobile in 1886. It featured wire wheels (unlike carriages' wooden ones)[57] with a four-stroke engine of his own design between the rear wheels, with a very advanced coil ignition[58] and evaporative cooling rather than a radiator.[58] Power was transmitted by means of two roller chains to the rear axle. It was the first automobile entirely designed as such to generate its own power, not simply a motorized stagecoach or horse carriage.

Benz began to sell the vehicle (advertising it as the Benz Patent Motorwagen) in the late summer of 1888, making it the first commercially available automobile in history.

Henry Ford built his first car in 1896 and worked as a pioneer in the industry, with others who would eventually form their own companies, until the founding of Ford Motor Company in 1903.[27] Ford and others at the company struggled with ways to scale up production in keeping with Henry Ford's vision of a car designed and manufactured on a scale so as to be affordable by the average worker.[27] The solution that Ford Motor developed was a completely redesigned factory with machine tools and special-purpose machines that were systematically positioned in the work sequence. All unnecessary human motions were eliminated by placing all work and tools within easy reach, and where practical on conveyors, forming the assembly line, the complete process being called mass production. This was the first time in history that a large, complex product consisting of 5,000 parts had been produced on a scale of hundreds of thousands per year.[27][32] The savings from mass production methods allowed the price of the Model T to decline from $780 in 1910 to $360 in 1916. In 1924, 2 million Model Ts were produced, retailing at $290 each.[59]
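
Working through the quoted figures (simple arithmetic, added for emphasis):

\[
\frac{780 - 360}{780} \approx 54\% \ \text{cheaper by 1916}, \qquad
\frac{780 - 290}{780} \approx 63\% \ \text{cheaper by 1924}.
\]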

Applied science

Applied science opened many opportunities. By the middle of the 19th century there was a scientific understanding of chemistry and a fundamental understanding of thermodynamics and by the last quarter of the century both of these sciences were near their present-day basic form. Thermodynamic principles were used in the development of physical chemistry. Understanding chemistry greatly aided the development of basic inorganic chemical manufacturing and the aniline dye industries.

The science of metallurgy was advanced through the work of Henry Clifton Sorby and others. Sorby pioneered the study of iron and steel under a microscope, which paved the way for a scientific understanding of metal and the mass-production of steel. In 1863 he used etching with acid to study the microscopic structure of metals and was the first to understand that a small but precise quantity of carbon gave steel its strength.[60] This opened the way for Henry Bessemer and Robert Forester Mushet to develop the method for mass-producing steel.

Other processes were developed for purifying various elements such as chromium, molybdenum, titanium, vanadium and nickel, which could be used for making alloys with special properties, especially with steel. Vanadium steel, for example, is strong and fatigue resistant, and was used in about half of automotive steel.[61] Alloy steels were used for ball bearings, which were used in large-scale bicycle production in the 1880s. Ball and roller bearings also began being used in machinery. Other important alloys are used at high temperatures, such as in steam turbine blades, and as stainless steels for corrosion resistance.

The work of Justus von Liebig and August Wilhelm von Hofmann laid the groundwork for modern industrial chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient and went on to establish Liebig's Extract of Meat Company which produced the Oxo meat extract. Hofmann headed a school of practical chemistry in London, under the style of the Royal College of Chemistry, introduced modern conventions for molecular modeling and taught Perkin who discovered the first synthetic dye.

The science of thermodynamics was developed into its modern form by Sadi Carnot, William Rankine, Rudolf Clausius, William Thomson, James Clerk Maxwell, Ludwig Boltzmann and J. Willard Gibbs. These scientific principles were applied to a variety of industrial concerns, including improving the efficiency of boilers and steam turbines. The work of Michael Faraday and others was pivotal in laying the foundations of the modern scientific understanding of electricity.

Scottish scientist James Clerk Maxwell was particularly influential—his discoveries ushered in the era of modern physics.[62] His most prominent achievement was to formulate a set of equations that described electricity, magnetism, and optics as manifestations of the same phenomenon, namely the electromagnetic field.[63] The unification of light and electrical phenomena led to the prediction of the existence of radio waves and was the basis for the future development of radio technology by Hughes, Marconi and others.[64]
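
For reference (modern notation, not in the original article), Maxwell's equations in differential form are

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\]

In a charge-free region the last two combine into a wave equation whose propagation speed is \(1/\sqrt{\mu_0 \varepsilon_0}\), the speed of light, which is what made the prediction of radio waves possible.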

Maxwell himself developed the first durable colour photograph in 1861 and published the first scientific treatment of control theory.[65][66] Control theory is the basis for process control, which is widely used in automation, particularly for process industries, and for controlling ships and airplanes.[67] Control theory was developed to analyze the functioning of centrifugal governors on steam engines. These governors came into use in the late 18th century on wind and water mills to correctly position the gap between mill stones, and were adapted to steam engines by James Watt. Improved versions were used to stabilize automatic tracking mechanisms of telescopes and to control speed of ship propellers and rudders. However, those governors were sluggish and oscillated about the set point. James Clerk Maxwell wrote a paper mathematically analyzing the actions of governors, which marked the beginning of the formal development of control theory. The science was continually improved and evolved into an engineering discipline.
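
As a loose illustration of the feedback problem those governors posed (a sketch with assumed numbers, not a historical model or Maxwell's analysis), the snippet below applies simple proportional control to a crude first-order "engine". With a gentle gain the speed settles smoothly, though slightly below the set point; with an aggressive gain it overshoots and rings around the set point, the hunting behaviour that motivated Maxwell's paper.

    # Proportional speed control on a crude first-order engine model.
    # All constants are illustrative assumptions, not historical data.

    def simulate(gain, steps=60, dt=0.1, drag=0.5, target=1.0):
        speed = 0.0
        trace = []
        for _ in range(steps):
            correction = gain * (target - speed)       # the governor opens or closes the throttle
            speed += dt * (correction - drag * speed)  # coarse Euler step of the engine dynamics
            trace.append(speed)
        return trace

    smooth = simulate(gain=2.0)    # settles steadily, a little below the set point
    hunting = simulate(gain=15.0)  # overshoots and oscillates before settling
    print(round(smooth[-1], 3), round(hunting[-1], 3))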

Fertilizer

Justus von Liebig was the first to understand the importance of ammonia as fertilizer, and promoted the importance of inorganic minerals to plant nutrition. In England, he attempted to implement his theories commercially through a fertilizer created by treating phosphate of lime in bone meal with sulfuric acid. Another pioneer was John Bennet Lawes who began to experiment on the effects of various manures on plants growing in pots in 1837, leading to a manure formed by treating phosphates with sulphuric acid; this was to be the first product of the nascent artificial manure industry.[68]

The discovery of coprolites in commercial quantities in East Anglia led Fisons and Edward Packard to develop one of the first large-scale commercial fertilizer plants, at Bramford and Snape, in the 1850s. By the 1870s superphosphates produced in those factories were being shipped around the world from the port at Ipswich.[69][70]

The Birkeland–Eyde process was developed by Norwegian industrialist and scientist Kristian Birkeland along with his business partner Sam Eyde in 1903,[71] but was soon replaced by the much more efficient Haber process,[72] developed by the Nobel prize-winning chemists Carl Bosch of IG Farben and Fritz Haber in Germany.[73] The process used molecular nitrogen (N2) and hydrogen, typically obtained from methane (CH4), in an economically sustainable synthesis of ammonia (NH3). The ammonia produced in the Haber process is the main raw material for production of nitric acid.
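
In outline (equations added here; the article itself gives none), the methane serves as the hydrogen source via steam reforming, and the synthesis itself combines nitrogen and hydrogen over a catalyst at high pressure:

\[
\mathrm{CH_4} + \mathrm{H_2O} \rightarrow \mathrm{CO} + 3\,\mathrm{H_2}, \qquad
\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}.
\]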

Engines and turbines

The steam turbine was developed by Sir Charles Parsons in 1884. His first model was connected to a dynamo that generated 7.5 kW (10 hp) of electricity.[74] The invention of Parsons's steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare.[75] By the time of Parsons's death, his turbine had been adopted for all major world power stations.[76] Unlike earlier steam engines, the turbine produced rotary power rather than reciprocating power, which required a crank and heavy flywheel. The large number of stages of the turbine allowed for high efficiency and reduced size by 90%. The turbine's first application was in shipping, followed by electric generation in 1903.

The first widely used internal combustion engine was the Otto type of 1876. From the 1880s until electrification it was successful in small shops because small steam engines were inefficient and required too much operator attention.[4] The Otto engine soon began being used to power automobiles, and remains as today's common gasoline engine.

The diesel engine was independently designed by Rudolf Diesel and Herbert Akroyd Stuart in the 1890s using thermodynamic principles with the specific intention of being highly efficient. It took several years to perfect and become popular, but found application in shipping before powering locomotives. It remains the world's most efficient prime mover.[4]

Telecommunications

Major telegraph lines in 1891.

The first commercial telegraph system was installed by Sir William Fothergill Cooke and Charles Wheatstone in May 1837 between Euston railway station and Camden Town in London.[77]

The rapid expansion of telegraph networks took place throughout the century, with the first undersea cable being built by John Watkins Brett between France and England. The Atlantic Telegraph Company was formed in London in 1856 to undertake to construct a commercial telegraph cable across the Atlantic Ocean. This was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way.[78] From the 1850s until 1911, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line.[79]

The telephone was patented in 1876 by Alexander Graham Bell, and like the early telegraph, it was used mainly to speed business transactions.[80]

As mentioned above, one of the most important scientific advancements in all of history was the unification of light, electricity and magnetism through Maxwell's electromagnetic theory. A scientific understanding of electricity was necessary for the development of efficient electric generators, motors and transformers. David Edward Hughes and Heinrich Hertz both demonstrated and confirmed the phenomenon of electromagnetic waves that had been predicted by Maxwell.[4]

It was Italian inventor Guglielmo Marconi who successfully commercialized radio at the turn of the century.[81] He founded The Wireless Telegraph & Signal Company in Britain in 1897[82][83] and in the same year transmitted Morse code across Salisbury Plain, sent the first ever wireless communication over open sea[84] and made the first transatlantic transmission in 1901 from Poldhu, Cornwall to Signal Hill, Newfoundland. Marconi built high-powered stations on both sides of the Atlantic and began a commercial service to transmit nightly news summaries to subscribing ships in 1904.[85]

The key development of the vacuum tube by Sir John Ambrose Fleming in 1904 underpinned the development of modern electronics and radio broadcasting. Lee De Forest's subsequent invention of the triode allowed the amplification of electronic signals, which paved the way for radio broadcasting in the 1920s.

Modern business management

Railroads are credited with creating the modern business enterprise by scholars such as Alfred Chandler. Previously, the management of most businesses had consisted of individual owners or groups of partners, some of whom often had little daily hands-on involvement in operations. Centralized expertise in the home office was not enough. A railroad required expertise available across the whole length of its trackage, to deal with daily crises, breakdowns and bad weather. A collision in Massachusetts in 1841 led to a call for safety reform. This led to the reorganization of railroads into different departments with clear lines of management authority. When the telegraph became available, companies built telegraph lines along the railroads to keep track of trains.[86]

Railroads involved complex operations, employed extremely large amounts of capital, and ran a more complicated business than anything previous. Consequently, they needed better ways to track costs. For example, to calculate rates they needed to know the cost of a ton-mile of freight. They also needed to keep track of cars, which could go missing for months at a time. This led to what was called "railroad accounting", which was later adopted by steel and other industries, and eventually became modern accounting.[87]

Later in the Second Industrial Revolution, Frederick Winslow Taylor and others in America developed the concept of scientific management or Taylorism. Scientific management initially concentrated on reducing the steps taken in performing work (such as bricklaying or shoveling) by using analysis such as time-and-motion studies, but the concepts evolved into fields such as industrial engineering, manufacturing engineering, and business management that helped to completely restructure[citation needed] the operations of factories, and later entire segments of the economy.

Taylor's core principles included:[citation needed]
  • replacing rule-of-thumb work methods with methods based on a scientific study of the tasks
  • scientifically selecting, training, and developing each employee rather than passively leaving them to train themselves
  • providing "detailed instruction and supervision of each worker in the performance of that worker's discrete task"
  • dividing work nearly equally between managers and workers, such that the managers apply scientific-management principles to planning the work and the workers actually perform the tasks

Socio-economic impacts

The period from 1870 to 1890 saw the greatest increase in economic growth over so short a period in all of previous history. Living standards improved significantly in the newly industrialized countries as the prices of goods fell dramatically due to the increases in productivity. At the same time, this caused unemployment and great upheavals in commerce and industry, with many laborers being displaced by machines and many factories, ships and other forms of fixed capital becoming obsolete in a very short time span.[50]
"The economic changes that have occurred during the last quarter of a century -or during the present generation of living men- have unquestionably been more important and more varied than during any period of the world's history".[50]
Crop failures no longer resulted in starvation in areas connected to large markets through transport infrastructure.[50]

Massive improvements in public health and sanitation resulted from public health initiatives, such as the construction of the London sewerage system in the 1860s and the passage of laws that regulated filtered water supplies (the Metropolis Water Act introduced regulation of the water supply companies in London, including minimum standards of water quality, for the first time in 1852). This greatly reduced the infection and death rates from many diseases.

By 1870 the work done by steam engines exceeded that done by animal and human power. Horses and mules remained important in agriculture until the development of the internal combustion tractor near the end of the Second Industrial Revolution.[88]

Improvements in steam efficiency, like triple-expansion steam engines, allowed ships to carry much more freight than coal, resulting in greatly increased volumes of international trade. Higher steam engine efficiency caused the number of steam engines to increase several fold, leading to an increase in coal usage, the phenomenon being called the Jevons paradox.[89]
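
A toy calculation of the Jevons paradox with assumed figures (not data from the text): if an efficiency gain halves the coal needed per unit of work, but cheaper power leads to three times as much work being demanded, total coal consumption still rises.

    # Jevons paradox with illustrative numbers only.
    coal_per_unit_before, coal_per_unit_after = 1.0, 0.5  # efficiency doubles
    work_before, work_after = 100, 300                    # assumed jump in demand for cheaper power

    print(coal_per_unit_before * work_before)  # 100.0 units of coal consumed before
    print(coal_per_unit_after * work_after)    # 150.0 units consumed after, despite the efficiency gain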

By 1890 there was an international telegraph network allowing orders to be placed by merchants in England or the US to suppliers in India and China for goods to be transported in efficient new steamships. This, plus the opening of the Suez Canal, led to the decline of the great warehousing districts in London and elsewhere, and the elimination of many middlemen.[50]

The tremendous growth in productivity, transportation networks, industrial production and agricultural output lowered the prices of almost all goods. This led to many business failures and to periods called depressions, even as the world economy actually grew.[50] See also: Long depression.

The factory system centralized production in separate buildings funded and directed by specialists (as opposed to work at home). The division of labor made both unskilled and skilled labor more productive, and led to a rapid growth of population in industrial centers. The shift away from agriculture toward industry had occurred in Britain by the 1730s, when the percentage of the working population engaged in agriculture fell below 50%, a development that would only happen elsewhere (the Low Countries) in the 1830s and '40s. By 1890, the figure had fallen to under 10 percent, and the vast majority of the British population was urbanized. This milestone was reached by the Low Countries and the US in the 1950s.[90]

Like the first industrial revolution, the second supported population growth and saw most governments protect their national economies with tariffs. Britain retained its belief in free trade throughout this period. The wide-ranging social impact of both revolutions included the remaking of the working class as new technologies appeared. The changes resulted in the creation of a larger, increasingly professional, middle class, the decline of child labor and the dramatic growth of a consumer-based, material culture.[91]

By 1900, the leader in industrial production was Britain with 24% of the world total, followed by the US (19%), Germany (13%), Russia (9%) and France (7%). Europe together accounted for 62%.[92]
The great inventions and innovations of the Second Industrial Revolution are part of our modern life. They continued to be drivers of the economy until after WWII. Only a few major innovations occurred in the post-war era, some of which are: computers, semiconductors, the fiber optic network and the Internet, cellular telephones, combustion turbines (jet engines) and the Green Revolution.[93] Although commercial aviation existed before WWII, it became a major industry after the war.

United Kingdom

Relative per capita levels of industrialization, 1750-1910.[94]

New products and services were introduced which greatly increased international trade. Improvements in steam engine design and the wide availability of cheap steel meant that slow sailing ships were replaced with faster steamships, which could handle more trade with smaller crews. The chemical industries also moved to the forefront. Britain invested less in technological research than the U.S. and Germany, which caught up.

The development of more intricate and efficient machines along with mass production techniques (after 1910) greatly expanded output and lowered production costs. As a result, production often exceeded domestic demand. Among the new conditions, more markedly evident in Britain, the forerunner of Europe's industrial states, were the long-term effects of the severe Long Depression of 1873–1896, which had followed fifteen years of great economic instability. Businesses in practically every industry suffered from lengthy periods of low — and falling — profit rates and price deflation after 1873.

United States

The U.S. had its highest economic growth rate in the last two decades of the Second Industrial Revolution;[95] however, population growth slowed while productivity growth peaked around the mid 20th century. The Gilded Age in America was based on heavy industry such as factories, railroads and coal mining. The iconic event was the opening of the First Transcontinental Railroad in 1869, providing six-day service between the East Coast and San Francisco.[96]

During the Gilded Age, American railroad mileage tripled between 1860 and 1880, and tripled again by 1920, opening new areas to commercial farming, creating a truly national marketplace and inspiring a boom in coal mining and steel production. The voracious appetite for capital of the great trunk railroads facilitated the consolidation of the nation's financial market in Wall Street. By 1900, the process of economic concentration had extended into most branches of industry—a few large corporations, some organized as "trusts" (e.g. Standard Oil), dominated in steel, oil, sugar, meatpacking, and the manufacture of agriculture machinery. Other major components of this infrastructure were the new methods for manufacturing steel, especially the Bessemer process. The first billion-dollar corporation was United States Steel, formed by financier J. P. Morgan in 1901, who purchased and consolidated steel firms built by Andrew Carnegie and others.[97]

Increased mechanization of industry and improvements to worker efficiency increased the productivity of factories while undercutting the need for skilled labor. Mechanical innovations such as batch and continuous processing began to become much more prominent in factories. This mechanization made some factories an assemblage of unskilled laborers performing simple and repetitive tasks under the direction of skilled foremen and engineers. In some cases, the advancement of such mechanization substituted for low-skilled workers altogether. The numbers of both unskilled and skilled workers increased, as their wage rates grew.[98] Engineering colleges were established to feed the enormous demand for expertise. Together with rapid growth of small business, a new middle class was rapidly growing, especially in northern cities.[99]

Employment distribution

In the early 1900s there was a disparity between the levels of employment seen in the northern and southern United States. On average, states in the North had both a higher population, and a higher rate of employment than states in the South. The higher rate of employment is easily seen by considering the 1909 rates of employment compared to the populations of each state in the 1910 census. This difference was most notable in the states with the largest populations, such as New York and Pennsylvania. Each of these states had roughly 5 percent more of the total US workforce than would be expected given their populations. Conversely, the states in the South with the best actual rates of employment, North Carolina and Georgia, had roughly 2 percent less of the workforce than one would expect from their population. When the averages of all southern states and all northern states are taken, the trend holds with the North over-performing by about 2 percent, and the South under-performing by about 1 percent.[100]
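
The comparison described above boils down to subtracting a state's share of the national population from its share of the national workforce. The sketch below shows that calculation; the two states and all numbers are hypothetical placeholders, not the 1909/1910 data.

    # Workforce share minus population share, in percentage points.
    # All figures are made-up placeholders, not census data.

    states = {
        # name: (population, employed workers)
        "State A": (9_000_000, 1_200_000),
        "State B": (2_500_000, 200_000),
    }

    total_pop = sum(pop for pop, _ in states.values())
    total_emp = sum(emp for _, emp in states.values())

    for name, (pop, emp) in states.items():
        diff = emp / total_emp - pop / total_pop  # > 0 means over-performing relative to population
        print(name, round(100 * diff, 1), "percentage points")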

Germany

The German Empire came to rival Britain as Europe's primary industrial nation during this period. Since Germany industrialized later, it was able to model its factories after those of Britain, thus making more efficient use of its capital and avoiding legacy methods in its leap to the envelope of technology. Germany invested more heavily than the British in research, especially in chemistry, motors and electricity. The German concern system (known as Konzerne), being significantly concentrated, was able to make more efficient use of capital. Germany was not weighted down with an expensive worldwide empire that needed defense. Following Germany's annexation of Alsace-Lorraine in 1871, it absorbed parts of what had been France's industrial base.[101]

By 1900 the German chemical industry dominated the world market for synthetic dyes. The three major firms BASF, Bayer and Hoechst, along with five smaller firms, produced several hundred different dyes. In 1913 these eight firms produced almost 90 percent of the world supply of dyestuffs, and sold about 80 percent of their production abroad. The three major firms had also integrated upstream into the production of essential raw materials and they began to expand into other areas of chemistry such as pharmaceuticals, photographic film, agricultural chemicals and electrochemicals. Top-level decision-making was in the hands of professional salaried managers, leading Chandler to call the German dye companies "the world's first truly managerial industrial enterprises".[102] There were many spin-offs from research—such as the pharmaceutical industry, which emerged from chemical research.[103]

Belgium

Belgium during the Belle Époque showed the value of the railways for speeding the Second Industrial Revolution. After 1830, when it broke away from the Netherlands and became a new nation, it decided to stimulate industry. It planned and funded a simple cruciform system that connected major cities, ports and mining areas, and linked to neighboring countries. Belgium thus became the railway center of the region. The system was soundly built along British lines, so that profits were low but the infrastructure necessary for rapid industrial growth was put in place.[104]

Alternative uses

There have been other times that have been called "second industrial revolution". Industrial revolutions may be renumbered by taking earlier developments, such as the rise of medieval technology in the 12th century, or of ancient Chinese technology during the Tang Dynasty, or of ancient Roman technology, as first. "Second industrial revolution" has been used in the popular press and by technologists or industrialists to refer to the changes following the spread of new technology after World War I.

Excitement and debate over the dangers and benefits of the Atomic Age were more intense and lasting than those over the Space age but both were predicted to lead to another industrial revolution. At the start of the 21st century[105] the term "second industrial revolution" has been used to describe the anticipated effects of hypothetical molecular nanotechnology systems upon society. In this more recent scenario, they would render the majority of today's modern manufacturing processes obsolete, transforming all facets of the modern economy.

Bootstrapping our way to an ageless future

September 19, 2007 by Aubrey de Grey
Original link:  http://www.kurzweilai.net/bootstrapping-our-way-to-an-ageless-future
An excerpt from Ending Aging, St. Martin’s Press, Sept. 2007, Chapter 14. 

Biomedical gerontologist Aubrey de Grey expects many people alive today to live to 1000 years of age and to avoid age-related health problems even at that age. In this excerpt from his just-published, much-awaited book, Ending Aging, he explains how.
 
I have a confession to make. In Chapters 5 through 12, where I explained the details of SENS, I elided one rather important fact—a fact that the biologists among my audience will very probably have spotted. I’m going to address that omission in this chapter, building on a line of reasoning that I introduced in an ostensibly quite circumscribed context towards the end of Chapter 9.
It is this: the therapies that we develop in a decade or so in mice, and those that may come only a decade or two later for humans, will not be perfect. Other things being equal, there will be a residual accumulation of damage within our bodies, however frequently and thoroughly we apply these therapies, and we will eventually experience age-related decline and death just as now, only at a greater age. Probably not all that much greater either — probably only 30-50 years older than today.

But other things won’t be equal. In this chapter, I’m going to explain why not—and why, as you may already know from other sources, I expect many people alive today to live to 1000 years of age and to avoid age-related health problems even at that age.

I’ll start by describing why it’s unrealistic to expect these therapies to be perfect.

MUST WE AGE?

A long life in a healthy, vigorous, youthful body has always been one of humanity's greatest dreams. Recent progress in genetic manipulations and calorie-restricted diets in laboratory animals holds forth the promise that someday science will enable us to exert total control over our own biological aging.

Nearly all scientists who study the biology of aging agree that we will someday be able to substantially slow down the aging process, extending our productive, youthful lives. Dr. Aubrey de Grey is perhaps the most bullish of all such researchers. As has been reported in media outlets ranging from 60 Minutes to The New York Times, Dr. de Grey believes that the key biomedical technology required to eliminate aging-derived debilitation and death entirely—technology that would not only slow but periodically reverse age-related physiological decay, leaving us biologically young into an indefinite future—is now within reach.

In Ending Aging, Dr. de Grey and his research assistant Michael Rae describe the details of this biotechnology. They explain that the aging of the human body, just like the aging of man-made machines, results from an accumulation of various types of damage. As with man-made machines, this damage can periodically be repaired, leading to indefinite extension of the machine’s fully functional lifetime, just as is routinely done with classic cars. We already know what types of damage accumulate in the human body, and we are moving rapidly toward the comprehensive development of technologies to remove that damage. By demystifying aging and its postponement for the nonspecialist reader, de Grey and Rae systematically dismantle the fatalist presumption that aging will forever defeat the efforts of medical science.

Evolution didn’t leave notes

I emphasised in Chapter 3 that the body is a machine, and that that’s both why it ages and why it can in principle be maintained. I made a comparison with vintage cars, which are kept fully functional even 100 years after they were built, using the same maintenance technologies that kept them going 50 years ago when they were already far older than they were ever designed to be. More complex machines can also be kept going indefinitely, though the expense and expertise involved may mean that this never happens in practice because replacing the machine is a reasonable alternative. This sounds very much like a reason to suppose that the therapies we develop to stave off aging for a few decades will indeed be enough to stave it off indefinitely.

But actually that’s overoptimistic. All we can reliably infer from a comparison with man-made machines is that a truly comprehensive panel of therapies, which truly repairs everything that goes wrong with us as a result of aging, is possible in principle— not that it is foreseeable. And in fact, if we look back at the therapies I’ve described in this book, we can see that actually one thing about them is very unlike maintenance of a man-made machine: these therapies strive to minimally alter metabolism itself, and target only the initially inert side-effects of metabolism, whereas machine maintenance may involve adding extra things to the machinery itself (to the fuel or the oil of a car, for example). We can get away with this sort of invasive maintenance of man-made machines because we (well, some of us!) know how they work right down to the last detail, so we can be adequately sure that our intervention won’t have unforeseen side-effects. With the body—even the body of a mouse—we are still profoundly ignorant of the details, so we have to sidestep our ignorance by interfering as little as possible.

What that means for efficacy of therapies is that, as we fix more and more aspects of aging, you can bet that new aspects will be unmasked. These new things—eighth and subsequent items to add to the “seven deadly things” listed in this book—will not be fatal at a currently normal age, because if they were, we’d know about them already. But they’ll be fatal eventually, unless we work out how to fix them too.

It’s not just “eighth things” we have to worry about, either. Within each of the seven existing categories, there are some subcategories that will be easier to fix than others. For example, there are lots of chemically distinct cross-links responsible for stiffening our arteries; some of them may be broken with ALT-711 and related molecules, but others will surely need more sophisticated agents that have not yet been developed. Another example: obviating mitochondrial DNA by putting modified copies of it into the cell’s chromosomes requires gene therapy, and thus far we have no gene therapy delivery system (“vector”) that can safely get into all cells, so for the foreseeable future we’ll probably only be able to protect a subset of cells from mtDNA mutations. Much better vectors will be needed if we are to reach all cells.

In practice, therefore, therapies that rejuvenate 60-year-olds by 20 years will not work so well the second time around. When the therapies are applied for the first time, the people receiving them will have 60 years of “easy” damage (the types that the therapies can remove) and also 60 years of “difficult” damage. But by the time beneficiaries of these therapies have returned to biologically 60 (which, let’s presume, will happen when they’re chronologically about 80), the damage their bodies contain will consist of 20 years of “easy” damage and 80 years of “difficult” damage. Thus, the therapies will only rejuvenate them by a much smaller amount, say ten years. So they’ll have to come back sooner for the third treatment, but that will benefit them even less… and very soon, just like Achilles catching up with the tortoise in Zeno’s paradox, aging will get the better of them. See Figure 1.

Figure 1. The diminishing returns delivered by repeated application of a rejuvenation regime.
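To make the arithmetic behind Figure 1 concrete, here is a minimal sketch in Python of the "easy versus difficult damage" bookkeeping described above. The specific numbers (the share of new damage that a first-generation therapy can remove, the biological age at which people seek treatment) are illustrative assumptions of mine, not figures from the book; the point is only the shape of the curve.

```python
# Toy model of Figure 1: a *fixed* therapy applied repeatedly gives
# diminishing returns, because the "difficult" damage it cannot remove
# keeps piling up.  All numbers are illustrative assumptions.

EASY_FRACTION = 1 / 3      # assumed share of new damage the therapy can remove
TREAT_AT_BIO_AGE = 60      # treat whenever total damage reaches 60 year-equivalents

easy, difficult = 0.0, 0.0
chron_age = 0
treatments = []

while chron_age < 120 and len(treatments) < 6:
    chron_age += 1
    easy += EASY_FRACTION            # damage the therapy can repair
    difficult += 1 - EASY_FRACTION   # damage it cannot
    if easy + difficult >= TREAT_AT_BIO_AGE - 1e-9:   # small tolerance for rounding
        treatments.append((chron_age, easy))          # the therapy clears only the easy part
        easy = 0.0

for n, (age, gain) in enumerate(treatments, 1):
    print(f"treatment {n}: chronological age {age}, rejuvenated by {gain:.1f} years")
```

With these numbers the first treatment (at 60) buys about 20 years, the second (at about 80) under seven, the third barely two, and the intervals between treatments shrink in step: the Zeno-like convergence described above.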

Back in Chapters 3 and 4 I explained that, contrary to one’s intuition, rejuvenation may actually be easier than retardation. Now it’s time to introduce an even more counterintuitive fact: that, even though it will be much harder to double a middle-aged human’s remaining lifespan than a middle-aged mouse’s, multiplying that remaining lifespan by much larger factors—ten or 30, say—will be much easier in humans than in mice.

The two-speed pace of technology

I’m now going to switch briefly from science to the history of science, or more precisely the history of technology.

It was well before recorded history that people began to take an interest in the possibility of flying: indeed, this may be a desire almost as ancient as the desire to live forever. Yet, with the notable but sadly unreproduced exception of Daedalus and Icarus, no success in this area was achieved until about a century ago. (If we count balloons then we must double that, but really only airships—balloons that can control their direction of travel reasonably well—should be counted, and they only emerged at around the same time as the aircraft.) Throughout the previous few centuries, engineers from Leonardo on devised ways to achieve controlled powered flight, and we must presume that they believed their designs to be only a few decades (at most) from realisation. But they were wrong.

Ever since the Wright brothers flew at Kitty Hawk, however, things have been curiously different. Having mastered the basics, aviation engineers seem to have progressed to ever greater heights (literally as well as metaphorically!) at an almost serenely smooth pace. To pick a representative selection of milestones: Lindbergh flew the Atlantic 24 years after the first powered flight occurred, the first commercial jetliner (the Comet) debuted 22 years after that, and the first supersonic airliner (Concorde) followed after a further 20 years.

This stark contrast between fundamental breakthroughs and incremental refinements of those breakthroughs is, I would contend, typical of the history of technological fields. Further, I would argue that it’s not surprising: both psychologically and scientifically, bigger advances are harder to estimate the difficulty of.

I mention all this, of course, because of what it tells us about the likely future progress of life extension therapies. Just as people were wrong for centuries about how hard it was to fly but eventually cracked it, we’ve been wrong since time immemorial about how hard aging is to combat but we’ll eventually crack it too. But just as people have been pretty reliably correct about how to make better and better aircraft once they had the first one, we can expect to be pretty reliably correct about how to repair the damage of aging more and more comprehensively once we can do it a little.

That’s not to say it’ll be easy, though. It’ll take time, just as it took time to get from the Wright Flyer to Concorde. And that is why, if you want to live to 1000, you can count yourself lucky that you’re a human and not a mouse. Let me take you through the scenario, step by step.

Suppose we develop Robust Mouse Rejuvenation in 2016, and we take a few dozen two-year-old mice and duly treble their one-year remaining lifespans. That will mean that, rather than dying in 2017 as they otherwise would, they’ll die in 2019. Well, maybe not—in particular, not if we can develop better therapies by 2018 that re-treble their remaining lifespan (which will by now be down to one year again). But remember, they’ll be harder to repair the second time: their overall damage level may be the same as before they received the first therapies, but a higher proportion of that damage will be of types that those first therapies can’t fix. So we’ll only be able to achieve that re-trebling if the therapies we have available by 2018 are considerably more powerful than those that we had in 2016. And to be honest, the chance that we’ll improve the relevant therapies that much in only two years is really pretty slim. In fact, the likely amount of progress in just two years is so small that it might as well be considered zero. Thus, our murine heroes will indeed die in 2019 (or 2020 at best), despite our best efforts.

But now, suppose we develop Robust Human Rejuvenation in 2031, and we take a few dozen 60-year-old humans and duly double their 30-year remaining lifespans. By the time they come back in (say) 2051, biologically 60 again but chronologically 80, they’ll need better therapies, just as the mice did in 2018. But luckily for them, we’ll have had not two but twenty years to improve the therapies. And 20 years is a very respectable period of time in technology—long enough, in fact, that we will with very high probability have succeeded in developing sufficient improvements to the 2031 therapies so that those 80-year-olds can indeed be restored from biologically 60 to biologically 40, or even a little younger, despite their enrichment (relative to 2031) in harder-to-repair types of damage. So unlike the mice, these humans will have just as many years (20 or more) of youth before they need third-generation treatments as they did before the second.

And so on …. See Figure 2.

Figure 2. How the diminishing returns depicted in Figure 1 are avoided by repeated application of a rejuvenation regime that is sufficiently more effective each time than the previous time.
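The contrast between the mouse and human timelines can be sketched with the same sort of toy bookkeeping. In the sketch below, whatever damage one generation of therapy leaves behind is untouched by later applications of that same generation, but each new generation reaches a little further. The assumed improvement rate (the unrepairable share of damage halving every 40 calendar years) and the starting efficacy (chosen so that the first treatment rejuvenates a 60-year-old by 20 years, as in the text) are my own illustrative choices, not figures from de Grey's calculations.

```python
# Toy model of Figure 2: the same bookkeeping as before, except that the
# therapies now improve between visits.  Whether that improvement matters
# depends on how long the gap between visits is: ~20 years for humans,
# only ~2 (mouse-)years for mice.  All numbers are illustrative assumptions.

def post_treatment_damage(first_age, interval, halving_time=40, unrepairable0=2/3, visits=6):
    """Unrepaired damage (in year-equivalents) left after each successive treatment."""
    u = unrepairable0                 # share of damage the current therapy cannot repair
    backlog = first_age * u           # what the first treatment leaves behind
    history = [round(backlog, 1)]
    for _ in range(visits - 1):
        u_next = u * 0.5 ** (interval / halving_time)   # therapies improve before the next visit
        # the next generation clears part of the old backlog and most of the new damage
        backlog = backlog * (u_next / u) + interval * u_next
        history.append(round(backlog, 1))
        u = u_next
    return history

print("human (treated every 20 years):", post_treatment_damage(60, 20))
print("mouse (treated every 2 years): ", post_treatment_damage(2, 2, visits=4))
```

Under these assumptions the human's post-treatment damage falls visit after visit (roughly 40, 38, 33, 28, 23, 19 year-equivalents), whereas the mouse's climbs past its entire three-year lifespan within a few visits, because two years is simply not long enough for the therapies to improve appreciably.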

Longevity Escape Velocity

The key conclusion of the logic I’ve set out above is that there is a threshold rate of biomedical progress that will allow us to stave off aging indefinitely, and that that rate is implausible for mice but entirely plausible for humans. If we can make rejuvenation therapies work well enough to give us time to make them work better, that will give us enough additional time to make them work better still, which will … you get the idea. This will allow us to escape age-related decline indefinitely, however old we become in purely chronological terms. I think the term “longevity escape velocity” (LEV) sums that up pretty well.1

One feature of LEV that’s worth pointing out is that we can accumulate lead-time. What I mean is that if we have a period in which we improve the therapies faster than we need to, that will allow us to have a subsequent period in which we don’t improve them so fast. It’s only the average rate of improvement, starting from the arrival of the first therapies that give us just 20 or 30 extra years, that needs to stay above the LEV threshold.

In case you’re having trouble assimilating all this, let me describe it in terms of the physical state of the body. Throughout this book, I’ve been discussing aging as the accumulation of molecular and cellular “damage” of various types, and I’ve highlighted the fact that a modest quantity of damage is no problem—metabolism just works around it, in the same way that a household only needs to put out the garbage once a week, not every hour. In those terms, the attainment and maintenance of escape velocity simply means that our best therapies must improve fast enough to outweigh the progressive shift in the composition of our aging damage to more repair-resistant forms, as the forms that are easier to repair are progressively eliminated by our therapies. If we can do this, the total amount of damage in each category can be kept permanently below the level that initiates functional decline.

Another, perhaps simpler, way of looking at this is to consider the analogy with literal escape velocity, i.e. the overcoming of gravity. Suppose you’re at the top of a cliff and you jump off. Your remaining life expectancy is short—and it gets shorter as you descend to the rocks below. This is exactly the same as with aging: the older you get, the less remaining time you can expect to live. The situation with the periodic arrival of ever better rejuvenation therapies is then a bit like jumping off a cliff with a jet-pack on your back. Initially the jetpack is turned off, but as you fall, you turn it on and it gives you a boost, slowing your fall. As you fall further, you turn up the power on the jetpack, and eventually you start to pull out of the dive and even start shooting upwards. And the further up you go, the easier it is to go even further.

The political and social significance of discussing LEV

I’ve had a fairly difficult time convincing my colleagues in biogerontology of the feasibility of the various SENS components, but in general I’ve been successful once I’ve been given enough time to go through the details. When it comes to LEV, on the other hand, the reception to my proposals can best be described as blank incomprehension. This is not too surprising, in hindsight, because the LEV concept is even further distant from the sort of scientific thinking that my colleagues normally do than my other ideas are: it’s not only an area of science that’s distant from mainstream gerontology, it’s not even science at all in the strict sense, but rather the history of technology. But I regard that as no excuse. The fact is, the history of technology is evidence, just like any other evidence, and scientists have no right to ignore it.

Another big reason for my colleagues’ resistance to the LEV concept is, of course, that if I’m seen to be right that achievement of LEV is foreseeable, they can no longer go around saying that they’re working on postponing aging by a decade or two but no more. As I outlined in Chapter 13, there is an intense fear within the senior gerontology community of being seen as having anything to do with radical life extension, with all the uncertainties that it will surely herald. They want nothing to do with such talk.

You might think that my reaction to this would be to focus on the short term: to avoid antagonising my colleagues with the LEV concept and its implications of four-digit lifespans, in favour of increased emphasis on the fine details of getting the SENS strands to work in a first-generation form. But this is not an option for me, for one very simple and incontrovertible reason: I’m in this business to save lives. In order to maximise the number of lives saved—healthy years added to people’s lives, if you’d prefer a more precise measure—I need to address the whole picture. And that means ensuring that you, dear reader—the general public—appreciate the importance of this work enough to motivate its funding.

Now, your first thought may be: hang on, if indefinite life extension is so unpalatable, wouldn’t funding be attracted more easily by keeping quiet about it? Well, no—and for a pretty good reason.

The world’s richest man, Bill Gates, set up a foundation a few years ago whose primary mission is to address health issues in the developing world.2 This is a massively valuable humanitarian effort, which I wholeheartedly support, even though it doesn’t directly help SENS at all. I’m not the only person who supports it, either: in 2006 the world’s second richest man, Warren Buffett, committed a large proportion of his fortune to be donated in annual increments to the Gates Foundation.3

The eagerness of extremely wealthy individuals to contribute to world health is, in more general terms, an enormous boost for SENS. This is mainly because a rising tide raises all boats: once it has become acceptable (even meritorious) among that community to be seen as a large-scale health philanthropist, those with “only” a billion or two to their name will be keener to join the trend than if it is seen as a crazy way to spend your hard-earned money.

But there’s a catch. That logic only works if the moral status of SENS is seen to compare with that of the efforts that are now being funded so well. And that’s where LEV makes all the difference.

SENS therapies will be expensive to develop and expensive to administer, at least at first. Let’s consider how the prospect of spending all that money might be received if the ultimate benefit would be only to add a couple of decades to the lives of people who are already living longer than most in the developing world, after which those people would suffer the same duration of functional decline that they do now.

It’s not exactly the world’s most morally imperative action, is it?

Indeed, I would go so far as to say that, if I were in control of a few billion dollars, I would be quite hesitant to spend it on such a marginal improvement in the overall quality and quantity of life of those who are already doing better in that respect than most, when the alternative exists of making a similar or greater improvement to the quality and quantity of life of the world’s less fortunate inhabitants.

The LEV concept doesn’t make much difference in the short term to who would benefit from these therapies, of course: it will necessarily be those who currently die of aging, so in the first instance it will predominantly be those in wealthy nations. But there is a very widespread appreciation in the industrialised world—an appreciation that, I feel, extends to the wealthy sectors of society—that progress in the long term relies on aiming high, and in particular that the moral imperative to help those at the rear of the field to catch up is balanced by the moral imperative to maximise the average rate of progress across the whole population, which initially means helping those who are already ahead. The fact that SENS is likely to lead to LEV means that developing SENS gives a huge boost to the quality and quantity of life of whoever receives it: so huge, in fact, that there is no problem justifying it in comparison with the alternative uses to which a similar sum of money might be put. The fact that lifespan is extended indefinitely rather than by only a couple of decades is only part of the difference that LEV makes, of course: arguably an even more important difference in terms of the benefit that SENS gives is that the whole of that life will be youthful, right up until a beneficiary mistimes the speed of an oncoming truck. The average quality of life, therefore, will rise much more than if all that was in prospect were a shift from (say) 7:1 to 9:1 in the ratio of healthy life to frail life.

Quantifying longevity escape velocity more precisely

This chapter has, I hope, closed down the escape routes that might still have remained for those seeking ways to defend a rejection of the SENS agenda. I have shown that SENS can be functionally equivalent to a way to eliminate aging completely, even though in actual therapeutic terms it will only be able to postpone aging by a finite amount at any given moment in time. I’ve also shown that this makes it morally just as desirable—imperative, even—as the many efforts into which a large amount of private philanthropic funding is already being injected.

I’m not complacent though: I know that people are quite ingenious when it comes to finding ways to avoid combating aging. Thus, in order to keep a few steps ahead, I have recently embarked on a collaboration with a stupendous programmer and futurist named Chris Phoenix, in which we are determining the precise degree of healthy life extension that one can expect from a given rate of progress in improving the SENS therapies. This is leading to a series of publications highlighting a variety of scenarios, but the short answer is that no wool has been pulled over your eyes above: the rate of progress we need to achieve starts out at roughly a doubling of the efficacy of the SENS therapies every 40 years and actually declines thereafter. By “doubling of efficacy” I mean a halving of the amount of damage that still cannot be repaired.
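For readers who want to see how such a calculation might run, here is a crude back-of-the-envelope version using the same toy bookkeeping as the earlier sketches: it scans for the slowest steady rate of improvement (expressed as the time taken to halve the share of damage the therapies cannot yet repair) that still keeps a person's total damage below a frailty threshold indefinitely. It is emphatically not de Grey and Phoenix's model; the frailty threshold, treatment interval and starting efficacy are all assumptions of mine.

```python
# Back-of-the-envelope search for the threshold rate of progress ("longevity
# escape velocity") under the same toy assumptions as the earlier sketches:
# first treatment at 60, repeat visits every 20 years, first-generation
# therapy rejuvenates by 20 years, frailty when total damage exceeds
# 85 year-equivalents.  Not de Grey and Phoenix's model.

def stays_below_frailty(halving_time, interval=20.0, start_age=60.0,
                        unrepairable0=2/3, frailty=85.0, visits=200):
    u = unrepairable0
    backlog = start_age * u                             # damage left after the first treatment
    for _ in range(visits):
        u_next = u * 0.5 ** (interval / halving_time)   # improvement before the next visit
        if backlog + interval > frailty:                # damage just before that visit
            return False
        backlog = backlog * (u_next / u) + interval * u_next
        u = u_next
    return True

slowest_ok = max(h for h in range(20, 401, 10) if stays_below_frailty(h))
print(f"under these assumptions, halving the unrepairable share every "
      f"~{slowest_ok} years is still fast enough")
```

Under these particular assumptions even a considerably slower rate than the 40-year doubling quoted above keeps the backlog of unrepaired damage bounded; the only point of the sketch is that a finite, and not especially heroic, rate of improvement suffices.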

So there you have it. We will almost certainly take centuries to reach the level of control over aging that we have over the aging of vintage cars—totally comprehensive, indefinite maintenance of full function—but because longevity escape velocity is not very fast, we will probably achieve something functionally equivalent within only a few decades from now, at the point where we have therapies giving middle-aged people 30 extra years of youthful life.

I think we can call that the fountain of youth, don’t you?



Notes

1. I first used the phrase “escape velocity” in print in the paper arising from the second SENS workshop—de Grey ADNJ, Baynes JW, Berd D, Heward CB, Pawelec G, Stock G. Is human aging still mysterious enough to be left only to scientists? BioEssays 2002;24(7):667-676. My first thorough description of the concept, however, didn’t appear until two years later: de Grey ADNJ. Escape velocity: why the prospect of extreme human life extension matters now. PLoS Biology 2004;2(6):e187.

2. Gates disburses these funds through the Bill and Melinda Gates Foundation, http://www.gatesfoundation.org

3. Buffett’s decision to donate most of his wealth to the Gates Foundation was announced in June 2006 and is the largest act of charitable giving in United States history.

© 2007 Aubrey de Grey
Ending Aging by Aubrey de Grey with Michael Rae, St. Martin’s Press, Sept. 4, 2007, ISBN: 0312367066

Rent-seeking

From Wikipedia, the free encyclopedia

In public choice theory and in economics, rent-seeking involves seeking to increase one's share of existing wealth without creating new wealth. Rent-seeking results in reduced economic efficiency through poor allocation of resources, reduced actual wealth-creation, lost government revenue, increased income inequality, and (potentially) national decline.

Attempts at capture of regulatory agencies to gain a coercive monopoly can result in advantages for the rent seeker in a market while imposing disadvantages on (incorrupt) competitors. This constitutes one of many possible forms of rent-seeking behavior.

Description

The idea of rent-seeking was developed by Gordon Tullock in 1967,[2] while the expression rent-seeking itself was coined in 1974 by Anne Krueger.[3] The word "rent" does not refer specifically to payment on a lease but rather to Adam Smith's division of incomes into profit, wage, and rent.[4] The origin of the term refers to gaining control of land or other natural resources.

Georgist economic theory describes rent-seeking in terms of land rent, where the value of land largely comes from government infrastructure and services (e.g. roads, public schools, maintenance of peace and order, etc.) and the community in general, rather than from the actions of any given landowner, in their role as mere titleholder. This role must be separated from the role of a property developer, which need not be the same person.

Rent-seeking is an attempt to obtain economic rent (i.e., the portion of income paid to a factor of production in excess of what is needed to keep it employed in its current use) by manipulating the social or political environment in which economic activities occur, rather than by creating new wealth. Rent-seeking implies extraction of uncompensated value from others without making any contribution to productivity. The classic example of rent-seeking, according to Robert Shiller, is that of a feudal lord who installs a chain across a river that flows through his land and then hires a collector to charge passing boats a fee (or rent of the section of the river for a few minutes) to lower the chain. There is nothing productive about the chain or the collector. The lord has made no improvements to the river and is not adding value in any way, directly or indirectly, except for himself. All he is doing is finding a way to make money from something that used to be free.[5]

In many market-driven economies, much of the competition for rents is legal, regardless of harm it may do to an economy. However, some rent-seeking competition is illegal – such as bribery or corruption.

Rent-seeking is distinguished in theory from profit-seeking, in which entities seek to extract value by engaging in mutually beneficial transactions.[6] Profit-seeking in this sense is the creation of wealth, while rent-seeking is "profiteering" by using social institutions, such as the power of the state, to redistribute wealth among different groups without creating new wealth.[7] In a practical context, income obtained through rent-seeking may contribute to profits in the standard, accounting sense of the word.

Tullock paradox

The Tullock paradox is the apparent paradox, described by Tullock, of the low costs of rent-seeking relative to the gains from it.[8][9]

The paradox is that rent-seekers wanting political favors can bribe politicians at a cost much lower than the value of the favor to the rent-seeker. For instance, a rent-seeker who hopes to gain a billion dollars from a particular political policy may need to bribe politicians only to the tune of ten million dollars, about 1% of the gain to the rent-seeker. Luigi Zingales frames it by asking, "Why is there so little money in politics?" A naive model of political bribery and/or campaign spending would predict that beneficiaries of government subsidies should be willing to spend up to the value of the subsidies themselves, when in fact only a small fraction of that is spent.

Possible explanations

Several possible explanations have been offered for the Tullock paradox:[10]
  1. Voters may punish politicians who take large bribes, or live lavish lifestyles. This makes it hard for politicians to demand large bribes from rent-seekers.
  2. Competition between different politicians eager to offer favors to rent-seekers may bid down the cost of rent-seeking.
  3. Lack of trust between the rent-seekers and the politicians, due to the inherently underhanded nature of the deal and the unavailability of both legal recourse and reputational incentives to enforce compliance, pushes down the price that politicians can demand for favors.

Examples

An example of rent-seeking in a modern economy is spending money on lobbying for government subsidies in order to be given wealth that has already been created, or to impose regulations on competitors, in order to increase market share.[11] Another example of rent-seeking is the limiting of access to lucrative occupations, as by medieval guilds or modern state certifications and licensures. Taxi licensing is a textbook example of rent-seeking.[12] To the extent that the issuing of licenses constrains the overall supply of taxi services (rather than ensuring competence or quality), forbidding competition from livery vehicles, unregulated taxis and/or illegal taxis renders the (otherwise consensual) transaction of taxi service a forced transfer of part of the fee from customers to taxi business proprietors.

The concept of rent-seeking also applies to corruption of bureaucrats who solicit and extract a "bribe" or "rent" for applying their legal but discretionary authority to award legitimate or illegitimate benefits to clients.[13] For example, tax officials may take bribes for lessening the tax burden of taxpayers.

Regulatory capture is a related term for the collusion between firms and the government agencies assigned to regulate them, which is seen as enabling extensive rent-seeking behavior, especially when the government agency must rely on the firms for knowledge about the market. Studies of rent-seeking focus on efforts to capture special monopoly privileges such as manipulating government regulation of free enterprise competition.[14] The term monopoly privilege rent-seeking is an often-used label for this particular type of rent-seeking. Often-cited examples include a lobby that seeks economic regulations such as tariff protection, quotas, subsidies,[15] or extension of copyright law.[16] Anne Krueger concludes that "empirical evidence suggests that the value of rents associated with import licenses can be relatively large, and it has been shown that the welfare cost of quantitative restrictions equals that of their tariff equivalents plus the value of the rents".[17]

Economists such as Lord Adair Turner, chair of the British financial regulator the Financial Services Authority, have argued that innovation in the financial industry is often a form of rent-seeking.[18][19]

Development of theory

The phenomenon of rent-seeking in connection with monopolies was first formally identified in 1967 by Gordon Tullock.[20]

Recent studies have shown that the incentives for policy-makers to engage in rent-provision are conditional on the institutional incentives they face, with elected officials in stable high-income democracies the least likely to indulge in such activities vis-à-vis entrenched bureaucrats and/or their counterparts in young and quasi-democracies.[21]

Criticism

Critics of the concept point out that, in practice, there may be difficulties distinguishing between beneficial profit-seeking and detrimental rent-seeking.[22]

Often a further distinction is drawn between rents obtained legally through political power and the proceeds of private common-law crimes such as fraud, embezzlement and theft. This viewpoint sees "profit" as obtained consensually, through a mutually agreeable transaction between two entities (buyer and seller), and the proceeds of common-law crime non-consensually, by force or fraud inflicted on one party by another. Rent, by contrast with these two, is obtained when a third party deprives one party of access to otherwise accessible transaction opportunities, making nominally "consensual" transactions a rent-collection opportunity for the third party. The high profits of the illegal drug trade are considered rents by this definition, as they are neither legal profits nor the proceeds of common-law crimes.

People accused of rent-seeking typically argue that they are indeed creating new wealth (or preventing the reduction of old wealth) by improving quality controls, guaranteeing that charlatans do not prey on a gullible public, and preventing bubbles.

Possible consequences

From a theoretical standpoint, the moral hazard of rent-seeking can be considerable. If "buying" a favorable regulatory environment seems cheaper than building more efficient production, a firm may choose the former option, reaping incomes entirely unrelated to any contribution to total wealth or well-being. This results in a sub-optimal allocation of resources – money spent on lobbyists and counter-lobbyists rather than on research and development, on improved business practices, on employee training, or on additional capital goods – which retards economic growth. Claims that a firm is rent-seeking therefore often accompany allegations of government corruption, or the undue influence of special interests.[23]

Rent-seeking can prove costly to economic growth: high levels of rent-seeking activity make further rent-seeking more attractive because of the natural and growing returns it yields, so organizations come to value rent-seeking over productivity. In this case there are very high levels of rent-seeking with very low levels of output.[citation needed] Rent-seeking may grow at the cost of economic growth because rent-seeking by the state can easily hurt innovation. Ultimately, public rent-seeking hurts the economy the most because innovation drives economic growth.[24]

Government agents may initiate rent-seeking – such agents soliciting bribes or other favors from the individuals or firms that stand to gain from having special economic privileges, which opens up the possibility of exploitation of the consumer.[25] It has been shown that rent-seeking by bureaucracy can push up the cost of production of public goods.[26] It has also been shown that rent-seeking by tax officials may cause loss in revenue to the public exchequer.[13]

Mancur Olson traced the historic consequences of rent seeking in The Rise and Decline of Nations. As a country becomes increasingly dominated by organized interest groups, it loses economic vitality and falls into decline. Olson argued that countries that have a collapse of the political regime and the interest groups that have coalesced around it can radically improve productivity and increase national income because they start with a clean slate in the aftermath of the collapse. An example of this is Japan after World War Two. But new coalitions form over time, once again shackling society in order to redistribute wealth and income to themselves. However, social and technological changes have allowed new enterprises and groups to emerge in the past.[27]

A study by David Laband and John Sophocleus in 1988[28] estimated that rent-seeking had decreased total income in the USA by 45 percent. Both Dougan and Tullock affirm the difficulty of finding the cost of rent-seeking. Rent-seekers of government-provided benefits will in turn spend up to the value of those benefits in order to gain them, in the absence of, for example, the collective-action constraints highlighted by Olson. Similarly, taxpayers lobby for loopholes and will spend up to the value of those loopholes to obtain them (again absent collective-action constraints). The total waste from rent-seeking is then the total amount spent pursuing government-provided benefits and tax loopholes (valuing the benefits and avoided taxes themselves at zero). Dougan says that the "total rent-seeking costs equal the sum of aggregate current income plus the net deficit of the public sector".[29]

Mark Gradstein writes about rent-seeking in relation to public goods provision, and says that public goods are determined by rent seeking or lobbying activities. But the question is whether private provision with free-riding incentives or public provision with rent-seeking incentives is more inefficient in its allocation.[30]

The economist Joseph Stiglitz has argued that rent-seeking contributes significantly to income inequality in the United States through lobbying for government policies that let the wealthy and powerful get income, not as a reward for creating wealth, but by grabbing a larger share of the wealth that would otherwise have been produced without their effort.[31][32] Piketty, Saez, and Stantcheva have analyzed international economies and their changes in tax rates to conclude that much of income inequality is a result of rent-seeking among wealthy taxpayers.

Introduction to entropy

From Wikipedia, the free encyclopedia