Friday, March 20, 2015

Concrete


From Wikipedia, the free encyclopedia


Outer view of the Roman Pantheon, still the largest unreinforced solid concrete dome.[1]

Inside the Pantheon dome, looking straight up. The concrete for the coffered dome was laid on moulds, probably mounted on temporary scaffolding.

Opus caementicium exposed in a characteristic Roman arch. In contrast to modern concrete structures, the concrete used in Roman buildings was usually covered with brick or stone.

Concrete is a composite material composed mainly of water, aggregate, and cement. Often, additives and reinforcements (such as rebar) are included in the mixture to achieve the desired physical properties of the finished material. When these ingredients are mixed together, they form a fluid mass that is easily molded into shape. Over time, the cement forms a hard matrix which binds the rest of the ingredients together into a durable stone-like material with many uses.[2]

Famous concrete structures include the Hoover Dam, the Panama Canal and the Roman Pantheon. The earliest large-scale users of concrete technology were the ancient Romans, and concrete was widely used in the Roman Empire. The Colosseum in Rome was built largely of concrete, and the concrete dome of the Pantheon is the world's largest unreinforced concrete dome.[3]

After the Roman Empire collapsed, use of concrete became rare until the technology was re-pioneered in the mid-18th century. Today, concrete is the most widely used man-made material (measured by tonnage).

History

The word concrete comes from the Latin word "concretus" (meaning compact or condensed),[4] the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).

Perhaps the earliest known occurrence of cement dates to twelve million years ago, when a deposit formed naturally after oil shale lying adjacent to a bed of limestone burned. These ancient deposits were investigated in the 1960s and 1970s.[5]

On a human time-scale, small-scale use of concrete goes back thousands of years. The ancient Nabataea culture was using materials roughly analogous to concrete at least eight thousand years ago, and some of their structures survive to this day.[6]

German archaeologist Heinrich Schliemann found concrete floors, which were made of lime and pebbles, in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC.[7][8] Lime mortars were used in Greece, Crete, and Cyprus in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of fully waterproof concrete.[9] Concrete was used for construction in many ancient structures.[10]

The Romans used concrete extensively from 300 BC to 476 AD, a span of more than seven hundred years.[5] During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman Architectural Revolution, freed Roman construction from the restrictions of stone and brick material and allowed for revolutionary new designs in terms of both structural complexity and dimension.[11]
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.[12]
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. 200 kg/cm²).[13] However, due to the absence of reinforcement, its tensile strength was far lower than that of modern reinforced concrete, and its mode of application was also different:[14]
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.[15]

Eddystone Lighthouse

The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges such as the magnificent Pont du Gard have masonry cladding on a concrete core, as does the dome of the Pantheon.

After the Roman Empire, the use of burned lime and pozzolana declined, and the technique was all but forgotten between 500 AD and the 1300s. From the 1300s until the mid-1700s, the use of cement gradually returned. The Canal du Midi was built using concrete in 1670,[16] and there are concrete structures in Finland that date from the 16th century.[citation needed]

Perhaps the greatest driver behind the modern usage of concrete was the third Eddystone Lighthouse in Devon, England. To create this structure, between 1756 and 1793, British engineer John Smeaton pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.[17]

A method for producing Portland cement was patented by Joseph Aspdin in 1824.[18]

Reinforced concrete was invented in 1849 by Joseph Monier.[19] The first reinforced concrete bridge was built in 1889, and the first large concrete dams, the Hoover Dam and the Grand Coulee Dam, were built in 1936.[20]

Ancient additives

Concrete-like materials have been used since 6500 BC by the Nabataea traders or Bedouins who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan. By 700 BC they had discovered the advantages of hydraulic lime, a cement with some self-cementing properties. They built kilns to supply mortar for the construction of rubble-wall houses, concrete floors, and underground waterproof cisterns. The cisterns were kept secret and were one of the reasons the Nabataea were able to thrive in the desert.[6] In both Roman and Egyptian times it was re-discovered that adding volcanic ash to the mix allowed it to set underwater. Similarly, the Romans knew that adding horse hair made concrete less liable to crack while it hardened, and adding blood made it more frost-resistant.[21]

Modern additives

In modern times, researchers have experimented with the addition of other materials to create concrete with improved properties, such as higher strength, electrical conductivity, or resistance to damage from spills.[22]

Impact of modern concrete use


Concrete mixing plant in Birmingham, Alabama in 1936

Concrete is widely used for making architectural structures, foundations, brick/block walls, pavements, bridges/overpasses, highways, runways, parking structures, dams, pools/reservoirs, pipes, footings for gates, fences and poles and even boats. Concrete is used in large quantities almost everywhere mankind has a need for infrastructure.

The amount of concrete used worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminum combined. Concrete's use in the modern world is exceeded only by that of naturally occurring water.[23]

Concrete is also the basis of a large commercial industry. Globally, the ready-mix concrete industry, the largest segment of the concrete market, is projected to exceed $100 billion in revenue by 2015.[24] In the United States alone, concrete production is a $30-billion-per-year industry, considering only the value of the ready-mixed concrete sold each year.[25] Given the size of the concrete industry, and the fundamental way concrete is used to shape the infrastructure of the modern world, it is difficult to overstate the role this material plays today.

Environmental and health

The manufacture and use of concrete produce a wide range of environmental and social consequences. Some are harmful, some welcome, and some both, depending on circumstances.
A major component of concrete is cement, which similarly exerts environmental and social effects.[26]:142 The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas (the other two being the energy production and transportation industries). As of 2001, the production of Portland cement contributed 7% to global anthropogenic CO2 emissions, largely due to the sintering of limestone and clay at 1,500 °C (2,730 °F).[27]

Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding.

Concrete is a primary contributor to the urban heat island effect, though less so than asphalt.[citation needed]

Workers who cut, grind or polish concrete are at risk of inhaling airborne silica, which can lead to silicosis.[28] Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution.

The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Wet concrete is highly alkaline and must be handled with proper protective equipment.

Recycled crushed concrete, to be reused as granular fill, is loaded into a semi-dump truck.

Concrete recycling

Concrete recycling is an increasingly common method of disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Concrete, which must be free of trash, wood, paper and other such materials, is collected from demolition sites and put through a crushing machine, often along with asphalt, bricks and rocks.

Reinforced concrete contains rebar and other metallic reinforcements, which are removed with magnets and recycled elsewhere. The remaining aggregate chunks are sorted by size. Larger chunks may go through the crusher again. Smaller pieces of concrete are used as gravel for new construction projects. Aggregate base gravel is laid down as the lowest layer in a road, with fresh concrete or asphalt placed over it. Crushed recycled concrete can sometimes be used as the dry aggregate for brand new concrete if it is free of contaminants, though the use of recycled concrete limits strength and is not allowed in many jurisdictions. On 3 March 1983, a government-funded research team estimated that almost 17% of worldwide landfill consisted of by-products of concrete-based waste.

Education and research

The National Building Museum in Washington, D.C. created an exhibition titled Liquid Stone: New Architecture in Concrete.[29] This exhibition, dedicated solely to the study of concrete as a building material, was on view to the public from June 2004 to January 2006.

Composition of concrete

There are many types of concrete available, created by varying the proportions of the main ingredients below. In this way or by substitution for the cementitious and aggregate phases, the finished product can be tailored to its application with varying strength, density, or chemical and thermal resistance properties.

"Aggregate" consists of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.

"Cement", most commonly Portland cement, is the binder associated with the general term "concrete." A range of materials can serve as the cement in concrete; one of the most familiar of these alternative binders is asphalt (used in asphalt concrete). Other cementitious materials, such as fly ash and slag cement, are sometimes added to Portland cement and become part of the binder for the aggregate.

Water is then mixed with this dry composite, which produces a semi-liquid that workers can shape (typically by pouring it into a form). The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust stone-like material.

"Chemical admixtures" are added to achieve varied properties. These ingredients may speed or slow down the rate at which the concrete hardens, and impart many other useful properties including increased tensile strength and water resistance.

"Reinforcements" are often added to concrete. Concrete can be formulated with high compressive strength, but always has lower tensile strength. For this reason it is usually reinforced with materials that are strong in tension (often steel).

"Mineral admixtures" are becoming more popular in recent decades. The use of recycled materials as concrete ingredients has been gaining popularity because of increasingly stringent environmental legislation, and the discovery that such materials often have complementary and valuable properties. The most conspicuous of these are fly ash, a by-product of coal-fired power plants, and silica fume, a byproduct of industrial electric arc furnaces. The use of these materials in concrete reduces the amount of resources required, as the ash and fume act as a cement replacement. This displaces some cement production, an energetically expensive and environmentally problematic process, while reducing the amount of industrial waste that must be disposed of.

The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.

Cement

A few tons of bagged cement. This amount represents about two minutes of output from a 10,000 ton per day cement kiln.

Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar and plaster. English masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of oxides of calcium, silicon and aluminium. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).

In modern cement kilns many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used in concrete the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allows cement kilns to efficiently and completely burn even difficult-to-use fuels.[30]
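The energy figures above can be turned into a rough estimate for the 10,000 ton-per-day kiln mentioned in the caption. A back-of-the-envelope sketch; the function and numbers are illustrative arithmetic, not plant data:

```python
# Rough daily fuel energy demand of a large cement kiln.
# Assumes the 3.3-3.6 GJ per ton of clinker range quoted in the text
# and the 10,000 ton/day output mentioned in the caption.

def daily_kiln_energy_gj(tons_per_day, gj_per_ton):
    """Total fuel energy per day, in gigajoules."""
    return tons_per_day * gj_per_ton

low = daily_kiln_energy_gj(10_000, 3.3)   # ~33,000 GJ/day
high = daily_kiln_energy_gj(10_000, 3.6)  # ~36,000 GJ/day
print(f"{low:,.0f} to {high:,.0f} GJ per day")
```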

Water

Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.[31]

A lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump.[32] Impure water used to make concrete can cause problems during setting or can cause premature failure of the structure.[33]
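The inverse relationship between water-to-cement ratio and strength is often summarized by Abrams' law, in which strength falls off exponentially as the w/c ratio rises. A minimal sketch; the constants A and B here are illustrative placeholders, not design values:

```python
# Abrams' law (sketch): compressive strength drops exponentially
# as the water-to-cement ratio increases.  A and B are illustrative
# placeholders, not values from any design code.

def abrams_strength(w_c_ratio, A=100.0, B=7.0):
    """Estimated compressive strength (MPa) for a given w/c ratio."""
    return A / (B ** w_c_ratio)

# A drier mix (lower w/c) is predicted to be stronger:
print(abrams_strength(0.40))  # stronger, stiffer mix
print(abrams_strength(0.60))  # weaker, but freer-flowing
```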

Hydration involves many different reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.[34]

Reaction:[34]
Cement chemist notation: C3S + H → C-S-H + CH
Standard notation: Ca3SiO5 + H2O → (CaO)·(SiO2)·(H2O)(gel) + Ca(OH)2
Balanced: 2Ca3SiO5 + 7H2O → 3(CaO)·2(SiO2)·4(H2O)(gel) + 3Ca(OH)2
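The balanced equation above can be sanity-checked by summing molar masses on each side. A quick check in Python, using approximate atomic masses (an illustration, not part of the source):

```python
# Mass-balance check for the alite hydration reaction:
#   2 Ca3SiO5 + 7 H2O -> 3CaO.2SiO2.4H2O (gel) + 3 Ca(OH)2
# Approximate atomic masses in g/mol.
Ca, Si, O, H = 40.078, 28.086, 15.999, 1.008

Ca3SiO5 = 3 * Ca + Si + 5 * O                         # alite (C3S)
H2O = 2 * H + O                                       # water
CSH_gel = 3 * (Ca + O) + 2 * (Si + 2 * O) + 4 * H2O   # C-S-H gel
CaOH2 = Ca + 2 * (O + H)                              # portlandite

left = 2 * Ca3SiO5 + 7 * H2O
right = CSH_gel + 3 * CaOH2
print(round(left, 3), round(right, 3))  # both sides agree
assert abs(left - right) < 1e-6  # mass is conserved
```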

Aggregates


Crushed stone aggregate

Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements of natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.

The presence of aggregate greatly increases the durability of concrete above that of cement, which is a brittle material in its pure state. Thus concrete is a true composite material.[35]

Redistribution of aggregates after compaction often creates inhomogeneity due to the influence of vibration. This can lead to strength gradients.[36]

Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.

In addition to being decorative, exposed aggregate adds robustness to a concrete driveway.[37]

Reinforcement


Constructing a rebar cage. This cage will be permanently embedded in poured concrete to create a reinforced concrete structure.

Concrete is strong in compression, as the aggregate efficiently carries the compression load. However, it is weak in tension as the cement holding the aggregate in place can crack, allowing the structure to fail. Reinforced concrete adds either steel reinforcing bars, steel fibers, glass fibers, or plastic fibers to carry tensile loads.

Chemical admixtures

Chemical admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing.[38] (See the section on concrete production, below.) The common types of admixtures[39] are as follows.
  • Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are CaCl2, Ca(NO3)2 and NaNO3. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so nitrates may be favored. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
  • Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting before the pour is complete is undesirable. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid.
  • Air-entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a trade-off with strength, as each 1% of air may decrease compressive strength by 5%.[citation needed] If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
  • Plasticizers increase the workability of plastic or "fresh" concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability, and are sometimes called water-reducers for this reason. Such treatment improves the concrete's strength and durability characteristics. Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Compounds used as superplasticizers include sulfonated naphthalene formaldehyde condensate, sulfonated melamine formaldehyde condensate, acetone formaldehyde condensate and polycarboxylate ethers.
  • Pigments can be used to change the color of concrete, for aesthetics.
  • Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
  • Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
  • Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
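The strength trade-off noted for air entrainment can be sketched numerically. This assumes the rule of thumb quoted in the list above (about 5% compressive strength lost per 1% of entrained air) and applies it multiplicatively, which is a modelling choice, not a standard formula:

```python
def strength_with_air(base_mpa, air_percent, loss_per_percent=0.05):
    """Estimated strength after air entrainment, applying the
    ~5%-loss-per-1%-air rule of thumb multiplicatively."""
    return base_mpa * (1 - loss_per_percent) ** air_percent

# A 30 MPa mix with 5% entrained air loses roughly a quarter
# of its compressive strength under this rough model:
print(round(strength_with_air(30, 5), 1))  # roughly 23.2 MPa
```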

Mineral admixtures and blended cements

Components of cement
Comparison of chemical and physical characteristics[a][40][41]

Property                     Portland   Siliceous fly ash    Calcareous fly ash   Slag         Silica
                             cement     (ASTM C618 Class F)  (ASTM C618 Class C)  cement       fume
SiO2 content (%)             21         52                   35                   35           85–97
Al2O3 content (%)            5          23                   18                   12           —
Fe2O3 content (%)            3          11                   6                    1            —
CaO content (%)              62         5                    21                   40           < 1
Specific surface[b] (m²/kg)  370        420                  420                  400          15,000–30,000
Specific gravity             3.15       2.38                 2.65                 2.94         2.22
General use in concrete      Primary    Cement               Cement               Cement       Property
                             binder     replacement          replacement          replacement  enhancer

[a] Values shown are approximate: those of a specific material may vary.
[b] Specific surface measurements for silica fume by nitrogen adsorption (BET) method, others by air permeability method (Blaine).
Mineral admixtures are inorganic materials with pozzolanic or latent hydraulic properties. These very fine-grained materials are added to the concrete mix to improve its properties (mineral admixtures)[38] or as a replacement for Portland cement (blended cements).[42] Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties are being tested and used. This development is driven by cement production being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions,[43] as well as by the desire to lower costs, improve concrete properties, and recycle wastes.
  • Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.[44]
  • Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.[45]
  • Silica fume: A byproduct of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.[46]
  • High reactivity Metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
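The replacement caps quoted in the list above can be expressed as a small proportioning helper. A sketch with a hypothetical function name; the caps mirror the 60% fly-ash and 80% GGBFS figures from the text:

```python
def blend_binder(total_kg, replacement_fraction, cap):
    """Split a binder mass into Portland cement and a supplementary
    cementitious material (SCM), enforcing the quoted replacement cap
    (e.g. 0.60 for fly ash, 0.80 for GGBFS per the text)."""
    if not 0 <= replacement_fraction <= cap:
        raise ValueError(f"replacement must be within 0..{cap}")
    scm = total_kg * replacement_fraction   # supplementary material
    cement = total_kg - scm                 # remaining Portland cement
    return cement, scm

# 400 kg of binder with 30% fly ash (cap 60% per the text):
print(blend_binder(400, 0.30, cap=0.60))  # (280.0, 120.0)
```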

Concrete production


Concrete plant facility showing a concrete mixer being filled from the ingredient silos.

Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant.

In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.

A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.

Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. There are many different ways in which concrete formwork can be prepared, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into drier, non-fluid forms and used in factory settings to manufacture precast concrete products.

There is a wide variety of equipment for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.

When initially mixed, Portland cement and water rapidly form a gel of tangled chains of interlocking crystals, and components of the gel continue to react over time. Initially the gel is fluid, which improves workability and aids in placement of the material, but as the concrete sets, the chains of crystals join into a rigid structure, counteracting the fluidity of the gel and fixing the particles of aggregate in place. During curing, the cement continues to react with the residual water in a process of hydration. In properly formulated concrete, once this curing process has terminated the product has the desired physical and chemical properties. Among the qualities typically desired are mechanical strength, low moisture permeability, and chemical and volumetric stability.

Mixing concrete

Thorough mixing is essential for the production of uniform, high-quality concrete. For this reason equipment and methods should be capable of effectively mixing concrete materials containing the largest specified aggregate to produce uniform mixtures of the lowest slump practical for the work.
Separate paste mixing has shown that mixing cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete.[47] The paste is generally mixed in a high-speed, shear-type mixer at a w/cm (water to cementitious materials ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.[48]
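The quoted w/cm range pins down the water mass once the cementitious mass is fixed. A minimal sketch; the function name and validation are illustrative, not from any standard:

```python
def premix_water_kg(cementitious_kg, w_cm=0.40):
    """Water mass for a paste premix at a given water-to-cementitious
    (w/cm) ratio; the text quotes a 0.30-0.45 range by mass."""
    if not 0.30 <= w_cm <= 0.45:
        raise ValueError("w/cm outside the 0.30-0.45 range quoted")
    return cementitious_kg * w_cm

print(premix_water_kg(100, 0.35))  # 35.0 kg of water per 100 kg of binder
```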

Decorative plate made of Nano concrete with High-Energy Mixing (HEM)

Nano concrete is created by high-energy mixing (HEM) of cement, sand and water using a specific consumed power of 30–600 W/kg, for a net specific energy consumption of at least 5 kJ/kg of the mix.[49] A plasticizer or a superplasticizer is then added to the activated mixture, which can later be mixed with aggregates in a conventional concrete mixer. In the HEM process, sand provides dissipation of energy and increases shear stresses on the surface of cement particles. A quasi-laminar flow of the mixture, characterized by a Reynolds number of less than 800,[50] is necessary to provide more effective energy absorption. This results in an increased volume of water interacting with cement and an acceleration of calcium silicate hydrate (C-S-H) colloid creation. The initial natural process of cement hydration, with formation of colloidal globules about 5 nm in diameter,[51] spreads after 3–5 min of HEM over the entire volume of the cement-water matrix. HEM is the "bottom-up" approach in the nanotechnology of concrete. The liquid activated mixture is used by itself for casting small architectural details and decorative items, or foamed (expanded) for lightweight concrete. HEM nano concrete hardens in low and subzero temperature conditions and possesses an increased volume of gel, which drastically reduces capillarity in solid and porous materials.

Workability


Pouring and smoothing out concrete at Palisades Park in Washington DC.

Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding (surface water) and/or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. The use of an aggregate with an undesirable gradation can result in a very harsh mix design with a very low slump, which cannot readily be made more workable by addition of reasonable amounts of water.

Workability can be measured by the concrete slump test, a simplistic measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (305 mm). A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.

Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio.[52] Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.

High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.

After mixing, concrete is a fluid and can be pumped to the location where needed.

Curing


A concrete slab ponded while curing.

In all but the least critical applications, care must be taken to properly cure concrete after placement, to achieve the best strength and hardness. Cement requires a moist, controlled environment to gain strength and harden fully. The cement paste hardens over time, initially setting and becoming rigid though very weak, then gaining strength in the weeks that follow. In around four weeks, typically over 90% of the final strength is reached, though strengthening may continue for decades.[53] The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. However, this reaction, called carbonation, lowers the pH of the cement pore solution and can cause the reinforcement bars to corrode.

Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-sectional dimensions of the elements and the service conditions of the structure.[54]

During this period the concrete must be kept under controlled temperature and in a humid atmosphere. In practice, this is achieved by spraying or ponding the concrete surface with water, thereby protecting the concrete mass from the ill effects of ambient conditions. The picture to the right shows one of many ways to achieve this: ponding, i.e. submerging the setting concrete in water and wrapping it in plastic to contain the water in the mix. Additional common curing methods include covering the fresh concrete with wet burlap and/or plastic sheeting, or spraying on a water-impermeable temporary curing membrane.

Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.

Properties

Concrete has relatively high compressive strength, but much lower tensile strength. For this reason it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.

Compression testing of a concrete cylinder

Different mixes of concrete ingredients produce different strengths. Concrete strength values are usually specified as the compressive strength of either a cylindrical or cubic specimen, where these values usually differ by around 20% for the same concrete mix.

Different strengths of concrete are used for different purposes. Very low-strength concrete (14 MPa (2,000 psi) or less) may be used when the concrete must be lightweight.[55] Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, concrete of 20 MPa (2,900 psi) to 32 MPa (4,600 psi) is used. 40 MPa (5,800 psi) concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects.[56] Strengths above 40 MPa (5,800 psi) are often used for specific building elements. For example, the lower-floor columns of high-rise concrete buildings may use concrete of 80 MPa (11,600 psi) or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to reduce the number of spans required.[57][58] Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as 130 MPa (18,900 psi) have been used commercially for these reasons.[57]

Strength (US customary)   Approximate SI equivalent
2,000 psi 14 MPa
2,500 psi 17 MPa
3,000 psi 21 MPa
3,500 psi 24 MPa
4,000 psi 28 MPa
5,000 psi 34 MPa
6,000 psi 41 MPa
7,000 psi 48 MPa
8,000 psi 55 MPa
10,000 psi 69 MPa
12,000 psi 83 MPa
19,000 psi 131 MPa
36,000 psi 248 MPa
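The SI equivalents in the table follow from the exact unit conversion 1 psi = 6.89476 kPa, rounded to whole MPa; a minimal sketch:

```python
PSI_TO_MPA = 0.00689476  # exact: 1 psi = 6,894.76 Pa = 0.00689476 MPa

def psi_to_mpa(psi):
    """Convert a compressive strength in psi to MPa."""
    return psi * PSI_TO_MPA

# Examples matching the table above (rounded):
#   psi_to_mpa(2000)  -> 13.79  (tabulated as 14 MPa)
#   psi_to_mpa(36000) -> 248.21 (tabulated as 248 MPa)
```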

Concrete degradation


Concrete spalling caused by the corrosion of rebar

Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water).[citation needed] The microfungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium and silicon.[59]

Microbial concrete

Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compressive strength of concrete through their biomass, though not all bacteria do so significantly.[26]:143 Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid.[26]:146 Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compressive strength.[26]:147

Use of concrete in infrastructure


Aerial photo of reconstruction at Taum Sauk (Missouri) pumped storage facility in late November, 2009. After the original reservoir failed, the new reservoir was made of roller-compacted concrete.

Mass concrete structures

Large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during cement hydration and associated expansion. To mitigate these effects, post-cooling[60] is commonly applied during construction. An early example is Hoover Dam, where a network of pipes was installed between vertical concrete placements to circulate cooling water during the curing process and avoid damaging overheating.
Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.[60]
Another approach to mass concrete structures that is becoming more widespread is the use of roller-compacted concrete, which uses much lower amounts of cement and water than conventional concrete mixtures and is generally not poured into place. Instead it is placed in thick layers as a semi-dry material and compacted into a dense, strong mass with rolling compactors. Because it uses less cementitious material, roller-compacted concrete has a much lower cooling requirement than conventional concrete.

Prestressed concrete structures


40-foot cacti decorate a sound/retaining wall in Scottsdale, Arizona

Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose those experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag.
Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.

Concrete textures

When one thinks of concrete, the image of a dull, gray concrete wall often comes to mind. With the use of form liner, concrete can be cast and molded into different textures and used for decorative concrete applications. Sound/retaining walls, bridges, office buildings and more serve as the optimal canvases for concrete art. For example, the Pima Freeway/Loop 101 retaining and sound walls in Scottsdale, Arizona, feature desert flora and fauna, a 67-foot (20 m) lizard and 40-foot (12 m) cacti along the 8-mile (13 km) stretch. The project, titled "The Path Most Traveled," is one example of how concrete can be shaped using elastomeric form liner.

Building with concrete


Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other manmade material in the world.[61] As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.[62]

More than 55,000 miles (89,000 km) of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. See Brutalism.

Concrete roads

Concrete roads are more fuel-efficient to drive on,[63] more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement is less expensive in initial cost and significantly less expensive over the life cycle.

Energy efficiency

Energy requirements for transportation of concrete are low because it is produced locally from local resources, typically manufactured within 100 kilometers of the job site. Similarly, relatively little energy is used in producing and combining the raw materials (although large amounts of CO2 are produced by the chemical reactions in cement manufacture).[citation needed] The overall embodied energy of concrete is therefore lower than for most structural materials other than wood.[citation needed]

Once in place, concrete offers great energy efficiency over the lifetime of a building.[64] Concrete walls leak air far less than those made of wood frames[citation needed]. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs.[65] While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.

Pervious concrete

Pervious concrete is a mix of specially graded coarse aggregate, cement, water and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totalling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze–thaw climates to minimize the possibility of frost damage.

Nano concrete

Concrete is the most widely manufactured construction material. The addition of carbon nanofibres to concrete has many advantages in terms of mechanical and electrical properties (e.g. higher strength and higher Young's modulus) and enables self-monitoring behavior, owing to the fibres' high tensile strength and high conductivity. Mullapudi[66] used the pulse velocity method to characterize the properties of concrete containing carbon nanofibres. The test results indicate that both the compressive strength and the percentage reduction in electrical resistance under load differ for concrete containing carbon nanofibres compared with plain concrete. A suitable concentration of carbon nanofibres needs to be determined for use in concrete, one that not only enhances compressive strength but also improves the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete.

Fire safety


A modern building: Boston City Hall (completed 1968) is constructed largely of concrete, both precast and poured in place. Of Brutalist architecture, it was voted "The World's Ugliest Building" in 2008.

Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower thermal conductivity than steel and can thus last longer under the same fire conditions. For the same reason, concrete is sometimes used as fire protection for steel frames. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments such as a missile launch pad.

Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.

Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes, owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a more flexible structure is required to resist extreme forces.

Earthquake safety

As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally.[67]
These risks can be reduced through seismic retrofitting of at-risk buildings (e.g. school buildings in Istanbul, Turkey[68]).

Useful life


The Tunkhannock Viaduct was begun in 1912 and is still in regular service as of 2014.

Concrete can be viewed as a form of artificial sedimentary rock. As a type of mineral, the compounds of which it is composed are extremely stable.[69] Many concrete structures are built with an expected lifetime of approximately 100 years,[70] but researchers have suggested that adding silica fume could extend the useful life of bridges and other concrete uses to as long as 16,000 years.[71] Coatings are also available to protect concrete from damage, and extend the useful life. Epoxy coatings may be applied only to interior surfaces, though, as they would otherwise trap moisture in the concrete.[72]

A self-healing concrete has been developed that can also last longer than conventional concrete.[73]

Large dams, such as the Hoover Dam, and the Three Gorges Dam are intended to last "forever", a period that is not quantified.[74]

World records

The world record for the largest concrete pour in a single project is held by the Three Gorges Dam in Hubei Province, China, built by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record, 12.3 million cubic meters, was held by the Itaipu hydropower station in Brazil.[75][76][77]

The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of 715 m (2,346 ft).[78][79]

The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix.[80][81] The pour (part of the foundation for Abu Dhabi's Landmark Tower) consisted of 16,000 cubic meters of concrete poured within a two-day period.[82] The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm that required the site to be covered with tarpaulins so work could continue, was achieved in 1992 by a joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.[83]

The world record for largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of 225,000 square feet (20,900 m2) of concrete placed within a 30-hour period, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.[84][85]

The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana, by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in a 58.5-hour period using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allowed the 50,180-square-foot (4,662 m2) cofferdam to be dewatered approximately 26 feet (7.9 m) below sea level, so that construction of the Inner Harbor Navigation Canal Sill & Monolith Project could be completed in the dry.[86]

Radiation hormesis




Alternative assumptions for the extrapolation of the cancer risk vs. radiation dose to low-dose levels, given a known risk at a high dose: supra-linearity (A), linear (B), linear-quadratic (C) and hormesis (D).

Radiation hormesis (also called radiation homeostasis) is the hypothesis that low doses of ionizing radiation (within the region of and just above natural background levels) are beneficial, stimulating the activation of repair mechanisms that protect against disease and that are not activated in the absence of ionizing radiation. The reserve repair mechanisms are hypothesized to be sufficiently effective when stimulated as not only to cancel the detrimental effects of ionizing radiation but also to inhibit disease not related to radiation exposure (see hormesis).[1][2][3][4] This counter-intuitive hypothesis has captured the attention of scientists and the public alike in recent years.[5]

While the effects of high and acute doses of ionising radiation are easily observed and understood in humans (e.g. Japanese atomic bomb survivors), the effects of low-level radiation are very difficult to observe and highly controversial. This is because the baseline cancer rate is already very high and the risk of developing cancer fluctuates by ~40% because of individual lifestyle and environmental effects,[6][7] obscuring the subtle effects of low-level radiation. An acute dose of 100 mSv may increase cancer risk by ~0.8%.

Government and regulatory bodies disagree on the existence of radiation hormesis.

Citing results from a literature database review, the Académie des Sciences – Académie nationale de Médecine (French Academy of Sciences and National Academy of Medicine) stated in their 2005 report on the effects of low-level radiation that many laboratory studies have observed radiation hormesis.[8][9] However, they cautioned that it is not yet known whether radiation hormesis occurs outside the laboratory, or in humans.[10]

Reports by the United States National Research Council, the National Council on Radiation Protection and Measurements, and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) argue[11] that there is no evidence for hormesis in humans; in the case of the National Research Council, hormesis is rejected outright as a possibility despite population and scientific evidence.[12] Therefore, the linear no-threshold model (LNT) continues to be the model generally used by regulatory agencies for human radiation exposure.

Proposed mechanism and ongoing debate


A very low dose of a chemical agent may trigger from an organism the opposite response to a very high dose.

Radiation hormesis proposes that radiation exposure comparable to and just above the natural background level of radiation is not harmful but beneficial, while accepting that much higher levels of radiation are hazardous.
Proponents of radiation hormesis typically claim that radio-protective responses in cells and the immune system not only counter the harmful effects of radiation but additionally act to inhibit spontaneous cancer not related to radiation exposure. Radiation hormesis stands in stark contrast to the more generally accepted linear no-threshold model (LNT), which states that the radiation dose-risk relationship is linear across all doses, so that small doses are still damaging, albeit less so than higher ones. Opinion pieces on chemical and radiobiological hormesis appeared in the journals Nature[1] and Science[3] in 2003.

Assessing the risk of radiation at low doses (<100 mSv) and low dose rates (<0.1 mSv/min) is highly problematic and controversial.[13][14] While epidemiological studies on populations exposed to an acute dose of high-level radiation, such as Japanese atomic bomb survivors (hibakusha, 被爆者), have robustly upheld the LNT (mean dose ~210 mSv),[15] studies involving low doses and low dose rates have failed to detect any increased cancer rate.[14] This is because the baseline cancer rate is already very high (~42 of 100 people will be diagnosed in their lifetime) and it fluctuates by ~40% because of lifestyle and environmental effects,[7][16] obscuring the subtle effects of low-level radiation. Epidemiological studies may be capable of detecting relative risks as low as 1.2 to 1.3, i.e. a 20% to 30% increase. But for low doses (1–100 mSv) the predicted relative risks are only 1.001 to 1.04, and excess cancer cases, if present, cannot be detected due to confounding factors, errors and biases.[16][17] Some newer studies, however, have concluded that there is a statistically significant indication of radiation hormesis.[18]
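To see why such excess risks are undetectable, consider a toy LNT calculation using the figures quoted above (a ~42% baseline lifetime cancer risk and the ~0.8%-per-100-mSv excess-risk figure); both numbers are rough illustrations, and real risk models depend on age, sex and dose rate:

```python
BASELINE_LIFETIME_RISK = 0.42  # ~42 of 100 people diagnosed in their lifetime
EXCESS_RISK_PER_SV = 0.08      # illustrative LNT slope (~0.8% per 100 mSv)

def lnt_relative_risk(dose_msv):
    """Relative lifetime cancer risk predicted by a simple LNT model:
    excess risk proportional to dose, added to the baseline."""
    excess = (dose_msv / 1000.0) * EXCESS_RISK_PER_SV
    return (BASELINE_LIFETIME_RISK + excess) / BASELINE_LIFETIME_RISK

# lnt_relative_risk(100) is roughly 1.02 -- far below the ~1.2 relative
# risk that epidemiological studies can reliably detect.
```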

In particular, variations in smoking prevalence, or even in the accuracy of reported smoking, cause wide variation in excess cancer and measurement-error bias. Thus, even a large study of many thousands of subjects with imperfect smoking-prevalence information may be less able to detect the effects of low-level radiation than a smaller study that properly compensates for smoking prevalence.[19] Given the absence of direct epidemiological evidence, there is considerable debate as to whether the dose-response relationship below 100 mSv is supralinear, linear (LNT), threshold, or sub-linear, i.e. a hormetic response.

While most major consensus reports and government bodies currently adhere to LNT,[20] the 2005 French Academy of Sciences-National Academy of Medicine's report concerning the effects of low-level radiation rejected LNT as a scientific model of carcinogenic risk at low doses.[10]
"Using LNT to estimate the carcinogenic effect at doses of less than 20 mSv is not justified in the light of current radiobiologic knowledge."
They consider there to be several dose-effect relationships rather than only one, and that these relationships depend on many variables, such as target tissue, radiation dose, dose rate and individual sensitivity factors. They state that further study is required on low doses (less than 100 mSv) and very low doses (less than 10 mSv), as well as on the impact of tissue type and age. The Academy considers the LNT model useful only for regulatory purposes, as it simplifies the administrative task. Citing results from literature research,[8][9] they furthermore claim that approximately 40% of laboratory studies on cell cultures and animals indicate some degree of chemical or radiobiological hormesis, and state:
"...its existence in the laboratory is beyond question and its mechanism of action appears well understood."
They go on to outline a growing body of research illustrating that the human body is not a passive accumulator of radiation damage but actively repairs the damage caused, via a number of different processes.[10][14]
Furthermore, the increased sensitivity to radiation-induced cancer in the inherited condition ataxia telangiectasia-like disorder illustrates the damaging effects of losing the repair gene Mre11h, which results in an inability to repair DNA double-strand breaks.[21]

The BEIR VII report argued that "the presence of a true dose threshold demands totally error-free DNA damage response and repair." The specific damage of concern is double-strand breaks (DSBs), and the report continues: "error-prone nonhomologous end joining (NHEJ) repair in postirradiation cellular response argues strongly against a DNA repair-mediated low-dose threshold for cancer initiation".[22] Recent research has observed that DSBs caused by CT scans are repaired within 24 hours, and that DSBs may be more efficiently repaired at low doses, suggesting that the risk of ionizing radiation at low doses may not be directly proportional to the dose.[23][24] However, it is not known whether low-dose ionizing radiation stimulates the repair of DSBs not caused by ionizing radiation, i.e. a hormetic response.

Radon gas in homes is the largest source of radiation dose for most individuals, and it is generally advised that the concentration be kept below 150 Bq/m³ (approximately 4 pCi/L).[25] A retrospective case-control study of lung cancer risk showed a substantial reduction in cancer rate at 50 to 123 Bq/m³ relative to a group at 0 to 25 Bq/m³.[26] This study is cited as evidence for hormesis, but a single study by itself cannot be regarded as definitive. Other studies into the effects of domestic radon exposure have not reported a hormetic effect, including for example the respected "Iowa Radon Lung Cancer Study" of Field et al. (2000), which also used sophisticated radon-exposure dosimetry.[27] In addition, Darby et al. (2005) argue that radon exposure is negatively correlated with the tendency to smoke, and environmental studies need to control for this accurately; people living in urban areas, where smoking rates are higher, usually have lower levels of radon exposure due to the increased prevalence of multi-story dwellings.[28] When doing so, they found a significant increase in lung cancer among smokers exposed to radon at concentrations as low as 100 to 199 Bq/m³, and warned that smoking greatly increases the risk posed by radon exposure, i.e. reducing the prevalence of smoking would decrease deaths caused by radon.[28][29]
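The two radon concentration units used in this literature are related by the exact activity conversion 1 pCi/L = 37 Bq/m³ (since 1 Ci = 3.7 × 10¹⁰ Bq); a minimal sketch:

```python
BQ_M3_PER_PCI_L = 37.0  # exact: 1 pCi/L = 37 Bq/m^3, from 1 Ci = 3.7e10 Bq

def bq_m3_to_pci_l(bq_m3):
    """Convert a radon concentration from Bq/m^3 to pCi/L."""
    return bq_m3 / BQ_M3_PER_PCI_L

# The 150 Bq/m^3 guideline quoted above is about 4.05 pCi/L, matching
# the commonly cited 4 pCi/L action level.
```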

Furthermore, particle microbeam studies show that passage of even a single alpha particle (e.g. from radon and its progeny) through cell nuclei is highly mutagenic,[30] and that alpha radiation may have a higher mutagenic effect at low doses (even if a small fraction of cells are hit by alpha particles) than predicted by linear no-threshold model, a phenomenon attributed to bystander effect.[31] However, there is currently insufficient evidence at hand to suggest that the bystander effect promotes carcinogenesis in humans at low doses.[32]

Statements by leading nuclear bodies

Radiation hormesis has not been accepted by either the United States National Research Council,[33] or the National Council on Radiation Protection and Measurements.[34] In addition, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) wrote in its most recent report:[35]
Until the [...] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances.
This is a reference to the fact that very low doses of radiation have only marginal impacts on individual health outcomes. It is therefore difficult to detect the 'signal' of decreased or increased morbidity and mortality due to low-level radiation exposure in the 'noise' of other effects. The notion of radiation hormesis has been rejected by the National Research Council's (part of the National Academy of Sciences) 16-year-long study on the Biological Effects of Ionizing Radiation. "The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial. The health risks – particularly the development of solid cancers in organs – rise proportionally with exposure" says Richard R. Monson, associate dean for professional education and professor of epidemiology, Harvard School of Public Health, Boston.[36][37]
The possibility that low doses of radiation may have beneficial effects (a phenomenon often referred to as “hormesis”) has been the subject of considerable debate. Evidence for hormetic effects was reviewed, with emphasis on material published since the 1990 BEIR V study on the health effects of exposure to low levels of ionizing radiation. Although examples of apparent stimulatory or protective effects can be found in cellular and animal biology, the preponderance of available experimental information does not support the contention that low levels of ionizing radiation have a beneficial effect. The mechanism of any such possible effect remains obscure. At this time, the assumption that any stimulatory hormetic effects from low doses of ionizing radiation will have a significant health benefit to humans that exceeds potential detrimental effects from radiation exposure at the same dose is unwarranted.
[37]

Studies of low level radiation

Very high natural background gamma radiation cancer rates at Kerala, India

Kerala's monazite sand (containing a third of the world's economically recoverable reserves of radioactive thorium) emits about 8 microsieverts per hour of gamma radiation, 80 times the dose-rate equivalent in London. Yet a decade-long study of 69,985 residents, published in Health Physics in 2009, "showed no excess cancer risk from exposure to terrestrial gamma radiation. The excess relative risk of cancer excluding leukemia was estimated to be −0.13 Gy⁻¹ (95% CI: −0.58, 0.46)", indicating no statistically significant positive or negative relationship between background radiation levels and cancer risk in this sample.[38]

Cultures

Studies in cell cultures can be useful for finding mechanisms for biological processes, but they also can be criticized for not effectively capturing the whole of the living organism.

A study by E.I. Azzam suggested that pre-exposure to radiation causes cells to turn on protection mechanisms.[39] A different study, by de Toledo and collaborators, showed that irradiation with gamma rays increases the concentration of glutathione, an antioxidant found in cells.[40]

In 2011, an in vitro study led by S.V. Costes showed, in time-lapse images, a strongly non-linear response of certain cellular repair mechanisms called radiation-induced foci (RIF). The study found that low doses of radiation prompted higher rates of RIF formation than high doses, and that after low-dose exposure RIF continued to form after the radiation had ended. Measured rates of RIF formation were 15 RIF/Gy at 2 Gy and 64 RIF/Gy at 0.1 Gy.[24] These results suggest that low doses of ionizing radiation may not increase cancer risk in direct proportion to dose, contradicting the linear no-threshold standard model.[41] Mina Bissell, a world-renowned breast cancer researcher and a collaborator in this study, stated: “Our data show that at lower doses of ionizing radiation, DNA repair mechanisms work much better than at higher doses. This non-linear DNA damage response casts doubt on the general assumption that any amount of ionizing radiation is harmful and additive.”[41]
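The non-linearity is easier to see if the quoted per-Gy rates are turned into total foci counts; a linear extrapolation from the high-dose point (an assumption made here purely for illustration) would predict far fewer RIF at 0.1 Gy than were observed:

```python
def total_rif(rif_per_gy, dose_gy):
    """Total radiation-induced foci implied by a per-Gy rate at a given dose."""
    return rif_per_gy * dose_gy

observed_high = total_rif(15, 2.0)      # 30 RIF observed at 2 Gy
observed_low = total_rif(64, 0.1)       # 6.4 RIF observed at 0.1 Gy
linear_prediction = total_rif(15, 0.1)  # 1.5 RIF if the 2 Gy rate held linearly

# The observed low-dose count (6.4 RIF) is about 4x the linear prediction
# (1.5 RIF), consistent with repair activity being disproportionately
# high at low doses.
```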

Animals

A study by Otsuka and collaborators found hormesis in animals.[42] Miyachi conducted a study on mice and found that a 200 mGy X-ray dose protects mice against both further X-ray exposure and ozone gas.[43] In another rodent study, Sakai and collaborators found that low-dose-rate (1 mGy/hr) gamma irradiation prevents the development of cancer induced by chemical means (injection of methylcholanthrene).[44]

In a 2006 paper,[45] a dose of 1 Gy was delivered to cells at a constant rate from a radioactive source over periods ranging from 8.77 to 87.7 hours. According to the abstract, when the dose was delivered over 35 hours or more (a low dose rate), no transformation of the cells occurred. When the 1 Gy dose was delivered over 8.77 to 18.3 hours, the biological effect (neoplastic transformation) was about "1.5 times less than that measured at high dose rate in previous studies with a similar quality of [X-ray] radiation." Likewise, it has been reported that fractionation of gamma irradiation reduces the likelihood of a neoplastic transformation.[46] Pre-exposure to fast neutrons and gamma rays from Cs-137, however, is reported to increase the ability of a second dose to induce a neoplastic transformation.[47]
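For orientation, the delivery times quoted in the abstract translate into the following dose rates; this is a simple unit conversion, not data from the paper itself:

```python
# Dose rates implied by delivering a fixed 1 Gy dose over each time span
total_dose_gy = 1.0
for hours in (8.77, 18.3, 35.0, 87.7):
    rate_mgy_h = total_dose_gy / hours * 1000  # mGy per hour
    print(f"1 Gy over {hours:5.2f} h = {rate_mgy_h:5.1f} mGy/h")
```

The "no transformation" regime (35 hours or more) thus corresponds to dose rates of roughly 29 mGy/h and below, while the 8.77 to 18.3 hour deliveries correspond to roughly 55 to 114 mGy/h.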

Caution must be used in interpreting these results; as noted in the BEIR VII report, such pre-doses can also increase cancer risk:
In chronic low-dose experiments with dogs (75 mGy/d for the duration of life), vital hematopoietic progenitors showed increased radioresistance along with renewed proliferative capacity (Seed and Kaspar 1992). Under the same conditions, a subset of animals showed an increased repair capacity as judged by the unscheduled DNA synthesis assay (Seed and Meyers 1993). Although one might interpret these observations as an adaptive effect at the cellular level, the exposed animal population experienced a high incidence of myeloid leukemia and related myeloproliferative disorders. The authors concluded that “the acquisition of radioresistance and associated repair functions under the strong selective and mutagenic pressure of chronic radiation is tied temporally and causally to leukemogenic transformation by the radiation exposure” (Seed and Kaspar 1992).
—BEIR VII report, [37]
However, 75 mGy/d cannot accurately be described as a low dose rate: it amounts to over 27 Sieverts per year. The same study showed no increase in cancer and no reduction in life expectancy for dogs irradiated at 3 mGy/day.[48]
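The annual figure is easy to verify. Assuming, as is standard for gamma radiation, that 1 mGy of absorbed dose corresponds to roughly 1 mSv of equivalent dose:

```python
# Convert the chronic daily dose rates discussed above into annual doses.
# For gamma rays the radiation weighting factor is 1, so 1 mGy ~= 1 mSv.
for mgy_per_day in (75, 3):
    sv_per_year = mgy_per_day * 365 / 1000
    print(f"{mgy_per_day} mGy/day = {sv_per_year:.2f} Sv/year")
```

So the 75 mGy/d dogs received over 27 Sv per year, while even the no-effect 3 mGy/d group received about 1.1 Sv per year, itself hundreds of times typical natural background.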

Humans

Effects of sunlight exposure

In an Australian study which analyzed the association between solar UV exposure and DNA damage, the results indicated that although the frequency of cells with chromosome breakage increased with increasing sun exposure, the misrepair of DNA strand breaks decreased as sun exposure was heightened.[49]

Effects of cobalt-60 exposure

The health of the inhabitants of radioactive apartment buildings in Taiwan has received prominent attention in popular treatments of radiation hormesis. In 1982, more than 20,000 tons of steel were accidentally contaminated with cobalt-60. Much of this radioactive steel was used to build apartments, exposing thousands of Taiwanese to gamma radiation levels up to 1,000 times background (average 47.7 mSv, maximum 2,360 mSv excess cumulative dose); the contamination was not discovered until 1992. A medical study published in 2004 claimed that cancer mortality rates in the exposed population were much lower than expected.[50] However, this initial study failed to control for age, comparing a much younger exposed population (mean age 17.2 years at initial exposure) with the much older general population of Taiwan (mean age approximately 34 years in 2004), a serious flaw.[51][52] Older people have much higher cancer rates even in the absence of excess radiation exposure. Even so, Chen et al. did find cancer incidence falling with time, still the opposite of what would be expected, even in a younger population.[citation needed]

A subsequent study by Hwang et al. (2006) found the incidence of "all cancers" in the irradiated population was 40% lower than expected (95 cases vs. 160.3 expected), except for leukaemia in men (6 vs. 1.8 expected) and thyroid cancer in women (6 vs. 2.8 expected), an increase detected only among those exposed before the age of 30. Hwang et al. proposed that the lower rate of "all cancers" might be due to the exposed population's higher socioeconomic status and thus overall healthier lifestyle, but this was difficult to prove. They also cautioned that leukaemia was the first cancer type found to be elevated among the survivors of the Hiroshima and Nagasaki bombings, so it may be decades before any increase in the more common cancer types is seen.[51]

Besides the excess risks of leukaemia and thyroid cancer, a later publication notes various DNA anomalies and other health effects among the exposed population:[53]
There have been several reports concerning the radiation effects on the exposed population, including cytogenetic analysis that showed increased micronucleus frequencies in peripheral lymphocytes in the exposed population, increases in acentromeric and single or multiple centromeric cytogenetic damages, and higher frequencies of chromosomal translocations, rings and dicentrics. Other analyses have shown persistent depression of peripheral leucocytes and neutrophils, increased eosinophils, altered distributions of lymphocyte subpopulations, increased frequencies of lens opacities, delays in physical development among exposed children, increased risk of thyroid abnormalities, and late consequences in hematopoietic adaptation in children.

Effects of no radiation

Given the uncertain effects of low-level and very-low-level radiation, there is a pressing need for quality research in this area. An expert panel convened at the 2006 Ultra-Low-Level Radiation Effects Summit at Carlsbad, New Mexico, proposed the construction of an Ultra-Low-Level Radiation laboratory.[54] The laboratory, if built, would investigate the effects of almost no radiation on laboratory animals and cell cultures, comparing these groups with control groups exposed to natural radiation levels. Precautions would be taken to exclude residual radioactivity, for example by removing potassium-40 from the food of the laboratory animals. The expert panel believes that such a laboratory is the only experiment that can explore with authority and confidence the effects of low-level radiation, and that it could confirm or discard the various radiobiological effects proposed at low radiation levels, e.g. LNT, threshold, and radiation hormesis.[55]

The first preliminary results on the effects of almost no radiation on cell cultures were reported by two research groups in 2011 and 2012: researchers in the US studied cell cultures shielded from radiation in a steel chamber 650 meters underground at the Waste Isolation Pilot Plant in Carlsbad, New Mexico,[56] and researchers in Europe reported the effects of almost no radiation on mouse cells (pKZ1 transgenic chromosomal inversion assay).[57]

The Fukushima Disaster Wasn't Disastrous Because Of The Radiation

The Tohoku earthquake and tsunami that struck Japan in March of 2011 was a disaster of epic proportions: over 20,000 people died, over 300,000 were left homeless, and the blow to the country’s economy and infrastructure was unlike anything in the last 40 years.

A week later, the Fukushima Daiichi nuclear plant, crippled by the tsunami, released a cloud of radiation that impacted neighboring prefectures and triggered a mass evacuation. The plant is still leaking.

But the real health and environmental impacts from the Fukushima reactors are nothing compared to the tsunami. Contrary to all the hype and fear, Fukushima is basically a large Superfund site. No one will die from Fukushima radiation, there will be no increased cancer rates, the food supply is not contaminated, the ocean nearby is not contaminated, most of the people can move back into their homes, and most of the other nuclear plants in Japan can start up just fine.

In fact, some Superfund sites in the United States have caused more health effects and environmental damage than the crippled Japanese reactors ever will.

But no Superfund site will ever have as much money spent on it as will Fukushima.

The Tohoku earthquake and tsunami that struck Japan in March of 2011 was a disaster of epic proportions – over 20,000 died, over 300,000 were left homeless, and the blow to the country’s economy and infrastructure was unlike anything in the last 40 years. But the crippling of the Fukushima nuclear plants wasn’t the disastrous part. Source: Google Maps

Years after those nuclear reactors withstood one of the largest earthquakes in history, only to fall to the largest tsunami in history, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) made a very strong and important statement concerning radiation effects from the Fukushima disaster (UNSCEAR press release; NYTimes DotEarth):

“It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers from exposure to radiation following the leaks and explosions at the earthquake-damaged power plant in March of 2011.”

But there was still a flurry of debate several days ago on the 4th anniversary of the accident, with many outrageous claims on all sides (HuffPost; NBC; Al Jazeera; Hiroshima Syndrome). However, as the Breakthrough Institute points out, the most important thing to take away from the last four years is what did not happen – and never will:

- Thyroid cancer rates among Fukushima children are no higher than in any other region of Japan, and are actually lower than in many.

The one thing the Japanese government did right was tell everyone not to eat anything from the area for a few months while the radioactive iodine decayed away completely. New ultrasound diagnostic techniques found more thyroid nodules and cysts in all Japanese (these were already present in the population), but the numbers for Fukushima children were actually lower than in the rest of Japan (NIH; NYTimes; UN Report; Nuclear News; J. of Am. Phys. and Surg.; CBCnews; Hiroshima Syndrome; National Geographic; Asahi Shimbun). Unfortunately, some very unethical and greedy people knowingly reported the wrong data sets and claimed that thyroid cancers have exploded in Japan and that Japanese children are dying by the thousands (Business Insider; Eco Child’s Play).

- Food from Fukushima is safe to eat, even the seafood.

Fukushima’s home-cooked meals have no detectable radioactive cesium. Three of the prefecture’s food cooperatives tested two days’ worth of meals from 100 households, and none had detectable radiation from Fukushima (MINPO). The fishing stocks off the Japanese coast are not contaminated (NAR).

- Radiation in most of the Evacuation Zone around Fukushima is low enough for people to move back.

Except for a relatively small region around the reactors, the risk to evacuees of moving back to their homes is about the same as the risk of driving a car (UNSCEAR). Yes, driving can be dangerous, but it is not a reason to live as a refugee for the rest of one’s life. On the other hand, the forced relocation of people in the evacuation zone is what caused all of the deaths and hardship that these people suffered in the aftermath of the reactor accident. Even with the fear and gross misrepresentations, about a third of the people from Fukushima want to return to their homes, about a third don’t want to, and about a third are undecided (MINPO News).

- None of the crewmembers of the USS Reagan, stationed off Fukushima, have cancer rates or other ailments any different from the rest of the Navy.

A Pentagon study found mildly elevated levels of stress disorders, but radiation effects such as cancer were actually lower on the USS Reagan than on most other ships. That is reasonable, since the total cumulative dose during the entire mission was only 0.08 mSv, far too low to cause any health effects (DTRA). But some unethical lawyers might make some real money.

- There is no, and never will be, a Fukushima Death Toll. No one received enough radiation to change the background cancer rates that normally exist in Japan.

For the general population in Fukushima prefecture, across Japan and beyond, the World Health Organization said, “the predicted risks are low and no observable increases in cancer rates above baseline rates are anticipated”. The Fukushima disaster doesn’t even rate on the scale of common radiation hazards. The EPA estimates that radiation from natural radon gas in our homes causes over 20,000 cancer fatalities in the United States every year, although this number is based on the LNT model just like all such predictions. That annual radon exposure is equivalent to five times the total radiation released from Fukushima, yet it draws no outcry from those who campaign against radiation effects.
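Estimates like the EPA radon figure come from exactly this kind of LNT arithmetic: expected deaths scale linearly with collective dose, with no threshold. A minimal sketch, using the ICRP's nominal coefficient of roughly 5.5% fatal cancer risk per sievert as an illustrative assumption (the article itself quotes no coefficient):

```python
# Minimal linear-no-threshold (LNT) sketch.
# Assumed for illustration: ~5.5% lifetime fatal-cancer risk per sievert
# (ICRP nominal value), applied linearly all the way down to zero dose.
RISK_PER_SV = 0.055

def lnt_expected_deaths(population, mean_dose_sv):
    """Expected fatal cancers under LNT: risk is proportional to dose."""
    return population * mean_dose_sv * RISK_PER_SV

# Example: 100,000 people each receiving an extra 10 mSv
print(round(lnt_expected_deaths(100_000, 0.010), 1))  # 55.0
```

Critics of LNT argue that precisely this multiplication is invalid at very low doses; hormesis proponents would add that the sign of the effect may even reverse there.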

I know the lack of death and destruction is boring to some, and doesn’t fit the evil picture painted by zealots against anything nuclear. But the amount of radiation released from Fukushima, while it sounds big, was too small, spread out over so many miles, to have much discernible effect on the Japanese people, now or in the future.
Or anyone else, anywhere.

But the stoking of fear and misrepresentation, the botched response and forced evacuations, the ridiculous limits on low levels of radiation, the closing of all nuclear plants and the resulting increase in coal-, oil-, and gas-fired electricity, and the politicization of the tragedy – these have huge and lasting effects.

Yes, the Fukushima site is a mess. It will cost billions to clean up. It was completely avoidable but Japan did not have a working regulatory commission or the safety guidelines that are in place in America. And they ignored our repeated warnings.

But as to death and destruction, the Fukushima accident shows that nuclear power plant disasters are not very disastrous.

Genetically Engineering Almost Anything


Original link:  http://www.pbs.org/wgbh/nova/next/evolution/crispr-gene-drives/

When it comes to genetic engineering, we’re amateurs. Sure, we’ve known about DNA’s structure for more than 60 years, we first sequenced every A, T, C, and G in our bodies more than a decade ago, and we’re becoming increasingly adept at modifying the genes of a growing number of organisms.

But compared with what’s coming next, all that will seem like child’s play. A new technology just announced today has the potential to wipe out diseases, turn back evolutionary clocks, and reengineer entire ecosystems, for better or worse. Because of how deeply this could affect us all, the scientists behind it want to start a discussion now, before all the pieces come together over the next few months or years. This is a scientific discovery being played out in real time.


Today, researchers aren’t just dropping in new genes; they’re deftly adding, subtracting, and rewriting them using a series of tools that have become ever more versatile and easier to use. In the last few years, our ability to edit genomes has improved at a shockingly rapid clip. So rapid, in fact, that one of the easiest and most popular tools, known as CRISPR-Cas9, is just two years old. Researchers once spent months, even years, attempting to rewrite an organism’s DNA. Now they spend days.

Soon, though, scientists will begin combining gene editing with gene drives, so-called selfish genes that appear more frequently in offspring than normal genes, which have about a 50-50 chance of being passed on. With gene drives—so named because they drive a gene through a population—researchers just have to slip a new gene into a drive system and let nature take care of the rest. Subsequent generations of whatever species we choose to modify—frogs, weeds, mosquitoes—will have more and more individuals with that gene until, eventually, it’s everywhere.

“This is one of the most exciting confluences of different theoretical approaches in science I’ve ever seen.”

Cas9-based gene drives could be one of the most powerful technologies ever discovered by humankind. “This is one of the most exciting confluences of different theoretical approaches in science I’ve ever seen,” says Arthur Caplan, a bioethicist at New York University. “It merges population genetics, genetic engineering, molecular genetics, into an unbelievably powerful tool.”

We’re not there yet, but we’re extraordinarily close. “Essentially, we have done all of the pieces, sometimes in the same relevant species,” says Kevin Esvelt, a postdoc at Harvard University and the wunderkind behind the new technology. “It’s just no one has put it all together.”

It’s only a matter of time, though. The field is progressing rapidly. “We could easily have laboratory tests within the next few months and then field tests not long after that,” says George Church, a professor at Harvard University and Esvelt’s advisor. “That’s if everybody thinks it’s a good idea.”

It’s likely not everyone will think this is a good idea. “There are clearly people who will object,” Caplan says. “I think the technique will be incredibly controversial.” Which is why Esvelt, Church, and their collaborators are publishing papers now, before the different parts of the puzzle have been assembled into a working whole.
“If we’re going to talk about it at all in advance, rather than in the past tense,” Church says, “now is the time.”

“Deleterious Genes”

The first organism Esvelt wants to modify is the malaria-carrying mosquito Anopheles gambiae. While his approach is novel, the idea of controlling mosquito populations through genetic modification has actually been around since the late 1970s. Then, Edward F. Knipling, an entomologist with the U.S. Department of Agriculture, published a substantial handbook with a chapter titled “Use of Insects for Their Own Destruction.” One technique, he wrote, would be to modify certain individuals to carry “deleterious genes” that could be passed on generation after generation until they pervaded the entire population. It was an idea before its time. Knipling was on the right track, but he and his contemporaries lacked the tools to see it through.

The concept surfaced a few more times before being picked up by Austin Burt, an evolutionary biologist and population geneticist at Imperial College London. It was the late 1990s, and Burt was busy with his yeast cells, studying their so-called homing endonucleases, enzymes that facilitate the copying of genes that code for themselves. Self-perpetuating genes, if you will. “Through those studies, gradually, I became more and more familiar with endonucleases, and I came across the idea that you might be able to change them to recognize new sequences,” Burt recalls.

Other scientists were investigating endonucleases, too, but not in the way Burt was. “The people who were thinking along those lines, molecular biologists, were thinking about using these things for gene therapy,” Burt says. “My background in population biology led me to think about how they could be used to control populations that were particularly harmful.”
 
In 2003, Burt penned an influential article that set the course for an entire field: We should be using homing endonucleases, a type of gene drive, to modify malaria-carrying mosquitoes, he said, not ourselves. Burt saw two ways of going about it—one, modify a mosquito’s genome to make it less hospitable to malaria, and two, skew the sex ratio of mosquito populations so there are no females for the males to reproduce with. In the following years, Burt and his collaborators tested both in the lab and with computer models before they settled on sex ratio distortion. (Making mosquitoes less hospitable to malaria would likely be a stopgap measure at best; the Plasmodium protozoans could evolve to cope with the genetic changes, just like they have evolved resistance to drugs.)

Burt has spent the last 11 years refining various endonucleases, playing with different scenarios of inheritance, and surveying people in malaria-infested regions. Now, he finally feels like he is closing in on his ultimate goal.
“There’s a lot to be done still,” he says. “But on the scale of years, not months or decades.”

Cheating Natural Selection

Cas9-based gene drives could compress that timeline even further. One half of the equation—gene drives—are the literal driving force behind proposed population-scale genetic engineering projects. They essentially let us exploit evolution to force a desired gene into every individual of a species. “To anthropomorphize horribly, the goal of a gene is to spread itself as much as possible,” Esvelt says. “And in order to do that, it wants to cheat inheritance as thoroughly as it can.” Gene drives are that cheat.

Without gene drives, traits in genetically-engineered organisms released into the wild are vulnerable to dilution through natural selection. For organisms that have two parents and two sets of chromosomes (which includes humans, many plants, and most animals), traits typically have only a 50-50 chance of being inherited, give or take a few percent. Genes inserted by humans face those odds when it comes time to being passed on. But when it comes to survival in the wild, a genetically modified organism’s odds are often less than 50-50. Engineered traits may be beneficial to humans, but ultimately they tend to be detrimental to the organism without human assistance. Even some of the most painstakingly engineered transgenes will be gradually but inexorably eroded by natural selection.

Some naturally occurring genes, though, have over millions of years learned how to cheat the system, inflating their odds of being inherited. Burt’s “selfish” endonucleases are one example. They take advantage of the cell’s own repair machinery to ensure that they show up on both chromosomes in a pair, giving them better than 50-50 odds when it comes time to reproduce.
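The inheritance advantage described above can be made concrete with a toy calculation. The sketch below assumes random mating, no fitness cost, and perfect drive conversion in heterozygotes, so any parent carrying the drive transmits it; these simplifications are mine, not the article's:

```python
def allele_frequency_after(generations, start_freq, drive=False):
    """Frequency of an engineered allele after a number of generations.

    Mendelian case: a heterozygote transmits the allele 50% of the time,
    so without selection its frequency stays roughly constant.
    Gene-drive case: the allele copies itself onto the partner chromosome,
    so every carrier transmits it, and the next-generation frequency is
    p^2 + 2p(1 - p) = p * (2 - p).
    """
    freq = start_freq
    for _ in range(generations):
        if drive:
            freq = freq * (2 - freq)
    return freq

print(allele_frequency_after(5, 0.01))                        # Mendelian: 0.01
print(round(allele_frequency_after(5, 0.01, drive=True), 2))  # drive: ~0.28
```

Starting from just 1% of the population, a perfect drive reaches essentially every individual within a few dozen generations, which is exactly why Esvelt and Church want the public discussion to happen before the pieces are assembled.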
