Wednesday, July 30, 2014

Hymenoptera -- That Most Amazing Order of Insects

Hymenoptera

From Wikipedia, the free encyclopedia
   
Hymenoptera
Temporal range: Triassic – Recent (251–0 Ma)
[Image: a female Netelia producta, the orange caterpillar parasite wasp]
Scientific classification
Kingdom: Animalia
Phylum: Arthropoda
Class: Insecta
Superorder: Hymenopterida
Order: Hymenoptera (Linnaeus, 1758)
Suborders: Apocrita, Symphyta

The Hymenoptera are one of the largest orders of insects, comprising the sawflies, wasps, bees and ants. Over 150,000 species are recognized, with many more remaining to be described. The name refers to the wings of the insects, and is derived from the Ancient Greek ὑμήν (hymen): membrane and πτερόν (pteron): wing. The hind wings are connected to the fore wings by a series of hooks called hamuli.

Females typically have a special ovipositor for inserting eggs into hosts or otherwise inaccessible places. The ovipositor is often modified into a stinger. The young develop through holometabolism (complete metamorphosis); that is, they have a worm-like larval stage and an inactive pupal stage before they mature.

Evolution

Hymenoptera originated in the Triassic, the oldest fossils belonging to the family Xyelidae. Social hymenopterans appeared during the Cretaceous.[1] The evolution of this group has been intensively studied by A. Rasnitsyn, M. S. Engel, G. Dlussky, and others.

Anatomy

Hymenopterans range in size from very small to large insects, and usually have two pairs of wings. Their mouthparts are adapted for chewing, with well-developed mandibles (ectognathous mouthparts). Many species have further developed the mouthparts into a lengthy proboscis, with which they can drink liquids such as nectar. They have large compound eyes, and typically three simple eyes (ocelli).

The forward margin of the hind wing bears a number of hooked bristles, or "hamuli", which lock onto the fore wing, keeping them held together. The smaller species may have only two or three hamuli on each side, but the largest wasps may have a considerable number, keeping the wings gripped together especially tightly. Hymenopteran wings have relatively few veins compared with many other insects, especially in the smaller species.

In the more ancestral hymenopterans, the ovipositor is blade-like, and has evolved for slicing plant tissues. In the majority, however, it is modified for piercing, and, in some cases, is several times the length of the body. In some species, the ovipositor has become modified as a stinger, and the eggs are laid from the base of the structure, rather than from the tip, which is used only to inject venom. The sting is typically used to immobilise prey, but in some wasps and bees may be used in defense.[2]

The larvae of the more ancestral hymenopterans resemble caterpillars in appearance, and like them, typically feed on leaves. They have large chewing mandibles, three thoracic limbs, and, in most cases, a number of abdominal prolegs. Unlike caterpillars, however, the prolegs have no grasping spines, and the antennae are reduced to mere stubs.

The larvae of other hymenopterans, however, more closely resemble maggots, and are adapted to life in a protected environment. This may be the body of a host organism, or a cell in a nest, where the adults will care for the larva. Such larvae have soft bodies with no limbs. They are also unable to defecate until they reach adulthood due to having an incomplete digestive tract, presumably to avoid contaminating their environment.[2]

Sex determination

Among most or all hymenopterans, sex is determined by the number of chromosome sets an individual possesses.[3] Fertilized eggs receive two sets of chromosomes (one from each parent) and so develop into diploid females, while unfertilized eggs contain only one set (from the mother) and so develop into haploid males; whether an egg is fertilized is under the voluntary control of the egg-laying female.[2] This phenomenon is called haplodiploidy.

However, the actual genetic mechanisms of haplodiploid sex determination may be more complex than simple chromosome number. In many Hymenoptera, sex is actually determined by a single gene locus with many alleles.[3] In these species, haploids are male and diploids heterozygous at the sex locus are female, but occasionally a diploid will be homozygous at the sex locus and develop as a male instead. This is especially likely to occur in an individual whose parents were siblings or other close relatives. Diploid males are known to be produced by inbreeding in many ant, bee and wasp species. Diploid biparental males are usually sterile but a few species that have fertile diploid males are known.[4]
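The logic of single-locus sex determination under inbreeding can be sketched with a toy simulation. This is an illustrative model only: the allele labels and brood size are invented, and real species carry many alleles at the sex locus; the sketch assumes the mother is heterozygous and mates with a haploid brother, who necessarily carries one of her two alleles.

```python
import random

random.seed(42)

def diploid_sex(maternal_allele, paternal_allele):
    """Single-locus complementary sex determination: heterozygous
    diploids are female; homozygous diploids are (diploid) males."""
    return "female" if maternal_allele != paternal_allele else "diploid male"

def sib_mating_brood(n_eggs=10_000):
    """Fraction of fertilized eggs that develop as diploid males when a
    heterozygous mother mates with her own haploid brother/son. Since
    the brother carries one of the mother's two alleles, each fertilized
    egg is homozygous with probability 1/2."""
    mother = (1, 2)                          # illustrative allele labels
    father_allele = random.choice(mother)    # haploid brother's single allele
    males = sum(
        diploid_sex(random.choice(mother), father_allele) == "diploid male"
        for _ in range(n_eggs)
    )
    return males / n_eggs

print(round(sib_mating_brood(), 2))  # close to 0.5
```

The simulation shows why inbreeding is so costly in these species: roughly half of the fertilized (would-be female) eggs from a sibling mating develop as usually sterile diploid males.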

One consequence of haplodiploidy is that females on average actually have more genes in common with their sisters than they do with their own daughters. Because of this, cooperation among kindred females may be unusually advantageous, and has been hypothesized to contribute to the multiple origins of eusociality within this order.[2] In many colonies of bees, ants, and wasps, worker females will remove eggs laid by other workers due to increased relatedness to direct siblings, a phenomenon known as worker policing.[5]
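The relatedness claim above follows from simple arithmetic, sketched here. The paternal half of a female's genome is identical across all her full sisters (their haploid father has only one genome to give), while the maternal half is shared with probability 1/2, as in ordinary diploids.

```python
# Back-of-envelope relatedness coefficients under haplodiploidy
# (the standard textbook calculation, not data from any one species).

def sister_sister_relatedness():
    paternal_half = 0.5 * 1.0   # haploid father gives every daughter
                                # his entire, identical genome
    maternal_half = 0.5 * 0.5   # each maternal allele is shared with
                                # probability 1/2
    return paternal_half + maternal_half

def mother_daughter_relatedness():
    return 0.5                  # a daughter gets half her genes from mother

print(sister_sister_relatedness())    # 0.75
print(mother_daughter_relatedness())  # 0.5
```

Because 0.75 > 0.5, a female shares more genes with a full sister than with her own daughter, which is the asymmetry hypothesized to favor eusociality.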

Diet

Different species of Hymenoptera show a wide range of feeding habits. The most primitive forms are typically herbivorous, feeding on leaves or pine needles. Stinging wasps are predators, and will provision their larvae with immobilised prey, while bees feed on nectar and pollen.

A number of species are parasitoid as larvae. The adults inject the eggs into a paralysed host, which they begin to consume after hatching. Some species are even hyperparasitoid, with the host itself being another parasitoid insect. Habits intermediate between those of the herbivorous and parasitoid forms are shown in some hymenopterans, which inhabit the galls or nests of other insects, stealing their food, and eventually killing and eating the occupant.[2]

Classification

Symphyta

The suborder Symphyta includes the sawflies, horntails, and parasitic wood wasps. The group may be paraphyletic, as the family Orussidae has been suggested to be the group from which the Apocrita arose. Symphytans have an unconstricted junction between the thorax and abdomen. The larvae are herbivorous, free-living, and eruciform, with three pairs of true legs, prolegs (on every segment, unlike the Lepidoptera), and ocelli. Unlike lepidopteran larvae, the prolegs lack crochet hooks at their ends.

Apocrita

The wasps, bees, and ants together make up the suborder Apocrita, characterized by a constriction between the first and second abdominal segments called a wasp-waist (petiole), which also involves the fusion of the first abdominal segment to the thorax. The larvae of all Apocrita lack legs, prolegs, and ocelli.

Polytetrafluoroethylene

From Wikipedia, the free encyclopedia
Polytetrafluoroethylene
Abbreviations: PTFE
CAS number: 9002-84-0
KEGG: D08974
ChEBI: CHEBI:53251
Molecular formula: (C2F4)n
Density: 2200 kg/m3
Melting point: 600 K
Thermal conductivity: 0.25 W/(m·K)
NFPA 704: flammability 0 (will not burn), health 1 (exposure would cause irritation but only minor residual injury), reactivity 0 (normally stable, even under fire exposure conditions; not reactive with water)
Except where noted otherwise, data are given for materials in their standard state (25 °C (77 °F), 100 kPa).
Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer of tetrafluoroethylene that has numerous applications. The best-known brand name of PTFE-based formulas is Teflon, from DuPont Co., which discovered the compound.

PTFE is a fluorocarbon solid, as it is a high-molecular-weight compound consisting wholly of carbon and fluorine. PTFE is hydrophobic: neither water nor water-containing substances wet PTFE, as fluorocarbons demonstrate mitigated London dispersion forces due to the high electronegativity of fluorine. PTFE has one of the lowest coefficients of friction against any solid.

PTFE is used as a non-stick coating for pans and other cookware. It is very non-reactive, partly because of the strength of carbon–fluorine bonds, and so it is often used in containers and pipework for reactive and corrosive chemicals. When used as a lubricant, PTFE reduces the friction, wear, and energy consumption of machinery. It is also commonly used as a graft material in surgical interventions.


History

External audio: "From stove tops to outer space... Teflon touches every one of us some way almost every day." (Roy Plunkett, Chemical Heritage Foundation)
[Image: Teflon thermal cover showing impact craters, from NASA's Ultra Heavy Cosmic Ray Experiment (UHCRE)]
PTFE was accidentally discovered in 1938 by Roy Plunkett while he was working in New Jersey for Kinetic Chemicals. As Plunkett attempted to make a new chlorofluorocarbon refrigerant, the tetrafluoroethylene gas in its pressure bottle stopped flowing before the bottle's weight had dropped to the point signaling "empty." Since Plunkett was measuring the amount of gas used by weighing the bottle, he became curious as to the source of the weight, and finally resorted to sawing the bottle apart. He found the bottle's interior coated with a waxy white material that was oddly slippery.
Analysis showed that it was polymerized perfluoroethylene, with the iron from the inside of the container having acted as a catalyst at high pressure. Kinetic Chemicals patented the new fluorinated plastic (analogous to the already known polyethylene) in 1941,[2] and registered the Teflon trademark in 1945.[3][4]

By 1948, DuPont, which founded Kinetic Chemicals in partnership with General Motors, was producing over two million pounds (900 tons) of Teflon brand PTFE per year in Parkersburg, West Virginia.[5] An early use was in the Manhattan Project as a material to coat valves and seals in the pipes holding highly reactive uranium hexafluoride at the vast K-25 uranium enrichment plant in Oak Ridge, Tennessee.[6]

In 1954, the wife of French engineer Marc Grégoire urged him to try the material he had been using on fishing tackle on her cooking pans. He subsequently created the first Teflon-coated non-stick pans under the brand name Tefal (combining "Tef" from "Teflon" and "al" from aluminum).[7] In the United States, Marion A. Trozzolo, who had been using the substance on scientific utensils, marketed the first US-made Teflon-coated pan, "The Happy Pan", in 1961.[8]

However, Tefal was not the only company to use PTFE in nonstick cookware coatings. In subsequent years, many cookware manufacturers developed proprietary PTFE-based formulas: Swiss Diamond International uses a diamond-reinforced PTFE formula,[9] Scanpan uses a titanium-reinforced one,[10] and Cuisinart's Chef's Classic and Advantage nonstick collections,[11] All-Clad,[12] and Newell Rubbermaid's Calphalon[13] use a non-reinforced PTFE-based nonstick. Other cookware companies, such as Meyer Corporation's Anolon, use Teflon[14] nonstick coatings purchased from DuPont.

In the 1990s, it was found that PTFE could be radiation cross-linked above its melting point in an oxygen-free environment.[15] Electron beam processing is one example of radiation processing. Cross-linked PTFE has improved high-temperature mechanical properties and radiation stability. This was significant because, for many years, irradiation at ambient conditions had been used to break down PTFE for recycling.[16] The radiation-induced chain scissioning allows it to be more easily reground and reused.

Production

PTFE is produced by free-radical polymerization of tetrafluoroethylene. The net equation is:
n F2C=CF2 → –(F2C–CF2)n–
Because tetrafluoroethylene can explosively decompose to tetrafluoromethane and carbon, special apparatus is required for the polymerization to prevent hot spots that might initiate this dangerous side reaction. The process is typically initiated with persulfate, which homolyzes to generate sulfate radicals:
[O3SO–OSO3]2− → 2 SO4•−
The resulting polymer is terminated with sulfate ester groups, which can be hydrolyzed to give OH-end-groups.[17]
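As a quick check on what "high-molecular-weight" means for the net equation above, the degree of polymerization n can be estimated from the mass of the C2F4 repeat unit. The chain mass of 10^7 g/mol used here is an illustrative assumption, not a figure from the text.

```python
# Rough sketch: degree of polymerization n of a PTFE chain.
# The repeat unit is -CF2-CF2- (C2F4); the chain mass is assumed.

M_C, M_F = 12.011, 18.998           # atomic masses, g/mol
repeat_unit = 2 * M_C + 4 * M_F     # C2F4, roughly 100 g/mol

chain_mass = 1e7                    # assumed chain mass, g/mol
n = chain_mass / repeat_unit
print(f"repeat unit: {repeat_unit:.2f} g/mol, n ~ {n:.0f}")
```

With a repeat unit near 100 g/mol, a 10-million-g/mol chain corresponds to on the order of a hundred thousand linked monomers.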

Because PTFE is poorly soluble in almost all solvents, the polymerization is conducted as an emulsion in water. This process gives a suspension of polymer particles. Alternatively, the polymerization is conducted using a surfactant such as PFOS.

Properties

PTFE is often used to coat non-stick pans as it is hydrophobic and possesses fairly high heat resistance.

PTFE is a thermoplastic polymer, which is a white solid at room temperature, with a density of about 2200 kg/m3. According to DuPont, its melting point is 600 K (327 °C; 620 °F).[18] It maintains high strength, toughness and self-lubrication at low temperatures down to 5 K (−268.15 °C; −450.67 °F), and good flexibility at temperatures above 194 K (−79 °C; −110 °F).[19] PTFE gains its properties from the aggregate effect of carbon-fluorine bonds, as do all fluorocarbons. The only chemicals known to affect these carbon-fluorine bonds are certain alkali metals and fluorinating agents such as xenon difluoride and cobalt(III) fluoride.[20]
Density: 2200 kg/m3
Melting point: 600 K
Thermal expansion: 135 × 10^-6 K^-1 [21]
Thermal diffusivity: 0.124 mm²/s [22]
Young's modulus: 0.5 GPa
Yield strength: 23 MPa
Bulk resistivity: 10^16 Ω·m [23]
Coefficient of friction: 0.05–0.10
Dielectric constant: ε = 2.1, tan(δ) < 5×10^-4
Dielectric constant (60 Hz): ε = 2.1, tan(δ) < 2×10^-7
Dielectric strength (1 MHz): 60 MV/m
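To illustrate how one of these figures is used in practice, the linear thermal expansion coefficient (135 × 10^-6 K^-1) predicts how much a PTFE part grows when heated. The rod length and temperature rise below are assumed for the example.

```python
# Estimate the thermal expansion of a PTFE rod: dL = alpha * L0 * dT.

alpha = 135e-6   # 1/K, PTFE's linear thermal expansion coefficient
L0 = 1.0         # m, assumed rod length
dT = 50.0        # K, assumed temperature rise

dL = alpha * L0 * dT
print(f"expansion: {dL * 1000:.2f} mm")  # 6.75 mm
```

An expansion of nearly 7 mm per metre over a 50 K rise is large by engineering standards, one reason PTFE seals and gaskets need careful mechanical design.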

The coefficient of friction of plastics is usually measured against polished steel.[24] PTFE's coefficient of friction is 0.05 to 0.10,[18] which is the third-lowest of any known solid material (BAM being the first, with a coefficient of friction of 0.02; diamond-like carbon being second-lowest at 0.05). PTFE's resistance to van der Waals forces means that it is the only known surface to which a gecko cannot stick.[25] In fact, PTFE can be used to prevent insects climbing up surfaces painted with the material. PTFE is so slippery that insects cannot get a grip and tend to fall off. For example, PTFE is used to prevent ants climbing out of formicaria.

Because of its chemical inertness, PTFE cannot be cross-linked like an elastomer. Therefore, it has no "memory" and is subject to creep. Because of its superior chemical and thermal properties, PTFE is often used as a gasket material. However, because of the propensity to creep, the long-term performance of such seals is worse than for elastomers which exhibit zero, or near-zero, levels of creep. In critical applications, Belleville washers are often used to apply continuous force to PTFE gaskets, ensuring a minimal loss of performance over the lifetime of the gasket.[26]

Applications and uses

The major application of PTFE, consuming about 50% of production, is for wiring in aerospace and computer applications (e.g. hookup wire, coaxial cables). This application exploits the fact that PTFE has excellent dielectric properties. This is especially true at high radio frequencies, making it suitable for use as an insulator in cables and connector assemblies and as a material for printed circuit boards used at microwave frequencies. Combined with its high melting temperature, this makes it the material of choice as a high-performance substitute for the weaker and lower-melting-point polyethylene commonly used in low-cost applications.

Another major application is in fuel and hydraulic lines, because PTFE offers little resistance to flowing liquids. Colder temperatures at high altitudes cause these fluids to flow more slowly; coating the lines' interior surfaces with low-resistance PTFE helps to compensate by allowing the liquids to move more easily.[17]

In industrial applications, owing to its low friction, PTFE is used where sliding action of parts is needed: plain bearings, gears, slide plates, etc. In these applications it performs significantly better than nylon and acetal, and is comparable to ultra-high-molecular-weight polyethylene (UHMWPE), although UHMWPE is more resistant to wear. For these applications, versions of PTFE with mineral oil or molybdenum disulfide embedded as additional lubricants in its matrix are manufactured. Its extremely high bulk resistivity makes it an ideal material for fabricating long-life electrets, useful devices that are the electrostatic analogues of magnets.

Gore-Tex is a material incorporating a fluoropolymer membrane with micropores. The roof of the Hubert H. Humphrey Metrodome in Minneapolis, USA, is one of the largest applications of PTFE coatings. 20 acres (81,000 m2) of the material was used in the creation of the white double-layered PTFE-coated fiberglass dome.

Other

PTFE (Teflon) is best known for its use in coating non-stick frying pans and other cookware, as it is hydrophobic and possesses fairly high heat resistance.
PTFE tapes with pressure-sensitive adhesive backing

Niche

PTFE is a versatile material that is found in many niche applications:
  • It is used as a film interface patch for sports and medical applications: backed with a pressure-sensitive adhesive, it is installed in strategic high-friction areas of footwear, insoles, ankle-foot orthoses, and other medical devices to prevent and relieve friction-induced blisters, calluses, and foot ulceration.
  • Powdered PTFE is used in pyrotechnic compositions as an oxidizer with powdered metals such as aluminium and magnesium. Upon ignition, these mixtures form carbonaceous soot and the corresponding metal fluoride, and release large amounts of heat. They are used in infrared decoy flares and as igniters for solid-fuel rocket propellants.[27]
  • In optical radiometry, sheets of PTFE are used as measuring heads in spectroradiometers and broadband radiometers (e.g., illuminance meters and UV radiometers) because PTFE diffuses transmitted light nearly perfectly. Moreover, the optical properties of PTFE stay constant over a wide range of wavelengths, from the UV to the near infrared. In this region, its regular transmittance is negligibly small compared with its diffuse transmittance, so light transmitted through a PTFE-sheet diffuser radiates according to Lambert's cosine law. PTFE thus enables a cosinusoidal angular response for a detector measuring the power of optical radiation at a surface, e.g. in solar irradiance measurements.
  • Certain types of hardened, armor-piercing bullets are coated with PTFE to reduce the wear on firearms' rifling that harder projectiles would cause. PTFE itself does not give a projectile an armor-piercing property.[28]
  • Its high corrosion resistance makes PTFE useful in laboratory environments, where it is used for lining containers, as a coating for magnetic stirrers, and as tubing for highly corrosive chemicals such as hydrofluoric acid, which will dissolve glass containers. It is used in containers for storing fluoroantimonic acid, a superacid.[citation needed]
  • PTFE tubes are used in gas-gas heat exchangers in gas cleaning of waste incinerators. Unit power capacity is typically several megawatts.
  • PTFE is also widely used as a thread seal tape in plumbing applications, largely replacing paste thread dope.
  • PTFE membrane filters are among the most efficient industrial air filters. PTFE-coated filters are often used in dust collection systems to collect particulate matter from air streams in applications involving high temperatures and high particulate loads such as coal-fired power plants, cement production and steel foundries.
  • PTFE grafts can be used to bypass stenotic arteries in peripheral vascular disease if a suitable autologous vein graft is not available.
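The cosine angular response mentioned in the radiometry bullet above can be sketched numerically. For an ideal Lambertian diffuser, the detector's response to a beam arriving at incidence angle θ scales as cos(θ); this is an idealized model, not a measured PTFE response curve.

```python
import math

# Ideal cosine angular response of a detector behind a Lambertian
# (e.g. PTFE-sheet) diffuser: response ~ cos(theta), clamped at grazing.

def cosine_response(theta_deg):
    return max(0.0, math.cos(math.radians(theta_deg)))

for theta in (0, 30, 60, 90):
    print(f"{theta:3d} deg -> {cosine_response(theta):.3f}")
```

Normal incidence gives full response, 60 degrees gives half, and grazing incidence gives essentially zero, exactly the weighting needed to measure irradiance on a surface.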

Safety

The pyrolysis of PTFE is detectable at 200 °C (392 °F), and it evolves several fluorocarbon gases and a sublimate. An animal study conducted in 1955 concluded that it is unlikely that these products would be generated in amounts significant to health at temperatures below 250 °C (482 °F).[29] More recently, however, a study documented birds having been killed by these decomposition products at 202 °C (396 °F), with unconfirmed reports of bird deaths as a result of non-stick cookware heated to as little as 163 °C (325 °F).[30]

While PTFE is stable and nontoxic at lower temperatures, it begins to deteriorate after the temperature of cookware reaches about 260 °C (500 °F), and decomposes above 350 °C (662 °F).[citation needed] These degradation by-products can be lethal to birds, and can cause flu-like symptoms in humans.[citation needed] In May 2003, the environmental research and advocacy organization Environmental Working Group filed a 14-page brief with the U.S. Consumer Product Safety Commission petitioning for a rule requiring that cookware and heated appliances bearing non-stick coatings carry a label warning of hazards to people and to birds.

Meat is usually fried between 204 and 232 °C (399 and 450 °F), and most oils start to smoke before a temperature of 260 °C (500 °F) is reached, but there are at least two cooking oils (refined safflower oil and avocado oil) that have a higher smoke point than 260 °C (500 °F). Empty cookware can also exceed this temperature when heated.

PFOA

Perfluorooctanoic acid (PFOA, or C8) is used as a surfactant in the emulsion polymerization of PTFE. Overall, PTFE cookware is considered an insignificant exposure pathway to PFOA.[31][32]

Similar polymers

Teflon is also used as the trade name for a polymer with similar properties, perfluoroalkoxy polymer resin (PFA).

The Teflon trade name is also used for other polymers with similar compositions, such as fluorinated ethylene propylene (FEP). These retain the useful PTFE properties of low friction and nonreactivity, but are more easily formable. For example, FEP is softer than PTFE and melts at 533 K (260 °C; 500 °F); it is also highly transparent and resistant to sunlight.[33]

Dark Energy: The Biggest Mystery in the Universe

South Pole Telescope

At the South Pole, astronomers try to unravel a force greater than gravity that will determine the fate of the cosmos

Smithsonian Magazine

Twice a day, seven days a week, from February to November for the past four years, two researchers have layered themselves with thermal underwear and outerwear, with fleece, flannel, double gloves, double socks, padded overalls and puffy red parkas, mummifying themselves until they look like twin Michelin Men. Then they step outside, trading the warmth and modern conveniences of a science station (foosball, fitness center, 24-hour cafeteria) for a minus-100-degree Fahrenheit featureless landscape, flatter than Kansas and one of the coldest places on the planet. They trudge in darkness nearly a mile, across a plateau of snow and ice, until they discern, against the backdrop of more stars than any hands-in-pocket backyard observer has ever seen, the silhouette of the giant disk of the South Pole Telescope, where they join a global effort to solve possibly the greatest riddle in the universe: what most of it is made of.

For thousands of years our species has studied the night sky and wondered if anything else is out there. Last year we celebrated the 400th anniversary of Galileo’s answer: Yes. Galileo trained a new instrument, the telescope, on the heavens and saw objects that no other person had ever seen: hundreds of stars, mountains on the Moon, satellites of Jupiter. Since then we have found more than 400 planets around other stars, 100 billion stars in our galaxy, hundreds of billions of galaxies beyond our own, even the faint radiation that is the echo of the Big Bang.

Now scientists think that even this extravagant census of the universe might be as out-of-date as the five-planet cosmos that Galileo inherited from the ancients. Astronomers have compiled evidence that what we’ve always thought of as the actual universe—me, you, this magazine, planets, stars, galaxies, all the matter in space—represents a mere 4 percent of what’s actually out there. The rest they call, for want of a better word, dark: 23 percent is something they call dark matter, and 73 percent is something even more mysterious, which they call dark energy.
“We have a complete inventory of the universe,” Sean Carroll, a California Institute of Technology cosmologist, has said, “and it makes no sense.”

Scientists have some ideas about what dark matter might be—exotic and still hypothetical particles—but they have hardly a clue about dark energy. In 2003, the National Research Council listed “What Is the Nature of Dark Energy?” as one of the most pressing scientific problems of the coming decades. The head of the committee that wrote the report, University of Chicago cosmologist Michael S. Turner, goes further and ranks dark energy as “the most profound mystery in all of science.”

The effort to solve it has mobilized a generation of astronomers in a rethinking of physics and cosmology to rival and perhaps surpass the revolution Galileo inaugurated on an autumn evening in Padua. They are coming to terms with a deep irony: it is sight itself that has blinded us to nearly the entire universe. And the recognition of this blindness, in turn, has inspired us to ask, as if for the first time: What is this cosmos we call home?

Scientists reached a consensus in the 1970s that there was more to the universe than meets the eye. In computer simulations of our galaxy, the Milky Way, theorists found that the center would not hold—based on what we can see of it, our galaxy doesn’t have enough mass to keep everything in place. As it rotates, it should disintegrate, shedding stars and gas in every direction. Either a spiral galaxy such as the Milky Way violates the laws of gravity, or the light emanating from it—from the vast glowing clouds of gas and the myriad stars—is an inaccurate indication of the galaxy’s mass.
But what if some portion of a galaxy’s mass didn’t radiate light? If spiral galaxies contained enough of such mystery mass, then they might well be obeying the laws of gravity. Astronomers dubbed the invisible mass “dark matter.”

“Nobody ever told us that all matter radiated,” Vera Rubin, an astronomer whose observations of galaxy rotations provided evidence for dark matter, has said. “We just assumed that it did.”
The effort to understand dark matter defined much of astronomy for the next two decades. Astronomers may not know what dark matter is, but inferring its presence allowed them to pursue in a new way an eternal question: What is the fate of the universe?

They already knew that the universe is expanding. In 1929, the astronomer Edwin Hubble had discovered that distant galaxies were moving away from us and that the farther away they got, the faster they seemed to be receding.
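Hubble's observation boils down to a linear relation, v = H0 × d: the farther a galaxy, the faster it recedes. The value H0 = 70 km/s per megaparsec used below is an assumed modern figure for illustration, not a number from the article.

```python
# Hubble's relation sketched numerically: recession velocity grows
# linearly with distance, v = H0 * d.

H0 = 70.0  # km/s per megaparsec (assumed illustrative value)

def recession_velocity(d_mpc):
    """Recession velocity in km/s for a galaxy d_mpc megaparsecs away."""
    return H0 * d_mpc

for d in (10, 100, 1000):
    print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")
```

Doubling the distance doubles the apparent recession speed, which is exactly the signature of a uniformly expanding universe rather than of galaxies fleeing some central point.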

This was a radical idea. Instead of the stately, eternally unchanging still life that the universe once appeared to be, it was actually alive in time, like a movie. Rewind the film of the expansion and the universe would eventually reach a state of infinite density and energy—what astronomers call the Big Bang. But what if you hit fast-forward? How would the story end?

The universe is full of matter, and matter attracts other matter through gravity. Astronomers reasoned that the mutual attraction among all that matter must be slowing down the expansion of the universe. But they didn’t know what the ultimate outcome would be. Would the gravitational effect be so forceful that the universe would ultimately stretch a certain distance, stop and reverse itself, like a ball tossed into the air? Or would it be so slight that the universe would escape its grasp and never stop expanding, like a rocket leaving Earth’s atmosphere? Or did we live in an exquisitely balanced universe, in which gravity ensures a Goldilocks rate of expansion neither too fast nor too slow—so the universe would eventually come to a virtual standstill?

Assuming the existence of dark matter and that the law of gravitation is universal, two teams of astrophysicists—one led by Saul Perlmutter, at the Lawrence Berkeley National Laboratory, the other by Brian Schmidt, at Australian National University—set out to determine the future of the universe.
Throughout the 1990s the rival teams closely analyzed a number of exploding stars, or supernovas, using those unusually bright, short-lived distant objects to gauge the universe’s growth. They knew how bright the supernovas should appear at different points across the universe if the rate of expansion were uniform. By comparing how bright the supernovas actually appeared with those predictions, astronomers figured they could determine how much the expansion of the universe was slowing down. But to the astronomers’ surprise, when they looked as far as halfway across the universe, six or seven billion light-years away, they found that the supernovas weren’t brighter (and therefore nearer) than expected. They were dimmer, that is, more distant. The two teams both concluded that the expansion of the universe isn’t slowing down. It’s speeding up.

The implication of that discovery was momentous: it meant that the dominant force in the evolution of the universe isn’t gravity. It is...something else. Both teams announced their findings in 1998. Turner gave the “something” a nickname: dark energy. It stuck. Since then, astronomers have pursued the mystery of dark energy to the ends of the Earth—literally.

“The South Pole has the harshest environment on Earth, but also the most benign,” says William Holzapfel, a University of California at Berkeley astrophysicist who was the on-site lead researcher at the South Pole Telescope (SPT) when I visited.

He wasn’t referring to the weather, though in the week between Christmas and New Year’s Day—early summer in the Southern Hemisphere—the Sun shone around the clock, the temperatures were barely in the minus single digits (and one day even broke zero), and the wind was mostly calm. Holzapfel made the walk from the National Science Foundation’s Amundsen-Scott South Pole Station (a snowball’s throw from the traditional site of the pole itself, which is marked with, yes, a pole) to the telescope wearing jeans and running shoes. One afternoon the telescope’s laboratory building got so warm the crew propped open a door.

But from an astronomer’s perspective, not until the Sun goes down and stays down (March through September) does the South Pole get “benign.”

“It’s six months of uninterrupted data,” says Holzapfel. During the 24-hour darkness of the austral autumn and winter, the telescope operates nonstop under impeccable conditions for astronomy. The atmosphere is thin (the pole is more than 9,300 feet above sea level, 9,000 of which are ice). The atmosphere is also stable, due to the absence of the heating and cooling effects of a rising and setting Sun; the pole has some of the calmest winds on Earth, and they almost always blow from the same direction.

Perhaps most important for the telescope, the air is exceptionally dry; technically, Antarctica is a desert. (Chapped hands can take weeks to heal, and perspiration isn’t really a hygiene issue, so the restriction to two showers a week to conserve water isn’t much of a problem. As one pole veteran told me, “The moment you go back through customs at Christchurch [New Zealand], that’s when you’ll need a shower.”) The SPT detects microwaves, a part of the electromagnetic spectrum that is particularly sensitive to water vapor. Humid air can absorb microwaves and prevent them from reaching the telescope, and moisture emits its own radiation, which could be misread as cosmic signals.

To minimize these problems, astronomers who analyze microwaves and submillimeter waves have made the South Pole a second home. Their instruments reside in the Dark Sector, a tight cluster of buildings where light and other sources of electromagnetic radiation are kept to a minimum. (Nearby are the Quiet Sector, for seismology research, and the Clean Air Sector, for climate projects.)

Astronomers like to say that for more pristine observing conditions, they would have to go into outer space—an exponentially more expensive proposition, and one that NASA generally doesn’t like to pursue unless the science can’t easily be done on Earth. (A dark energy satellite has been on and off the drawing board since 1999, and last year went “back to square one,” according to one NASA adviser.) At least on Earth, if something goes wrong with an instrument, you don’t need to commandeer a space shuttle to fix it.

The United States has maintained a year-round presence at the pole since 1956, and by now the National Science Foundation’s U.S. Antarctic Program has gotten life there down to, well, a science.
Until 2008, the station was housed in a geodesic dome whose crown is still visible above the snow. The new base station resembles a small cruise ship more than a remote outpost and sleeps more than 150, all in private quarters. Through the portholes that line the two floors, you can contemplate a horizon as hypnotically level as any ocean’s. The new station rests on lifts that, as snow accumulates, allow it to be jacked up two full stories.

The snowfall in this ultra-arid region may be minimal, but that which blows in from the continent’s edges can still make a mess, creating one of the more mundane tasks for the SPT’s winter-over crew.
Once a week during the dark months, when the station population shrinks to around 50, the two on-site SPT researchers have to climb into the telescope’s 33-foot-wide microwave dish and sweep it clean. The telescope gathers data and sends it to the desktops of distant researchers. The two “winter-overs” spend their days working on the data, too, analyzing it as if they were back home. But when the telescope hits a glitch and an alarm on their laptops sounds, they have to figure out what the problem is—fast.

“An hour of down time is thousands of dollars of lost observing time,” says Keith Vanderlinde, one of 2008’s two winter-overs. “There are always little things. A fan will break because it’s so dry down there, all the lubrication goes away. And then the computer will overheat and turn itself off, and suddenly we’re down and we have no idea why.” At that point, the environment might not seem so “benign” after all. No flights go to or from the South Pole from March to October (a plane’s engine oil would gelatinize), so if the winter-overs can’t fix whatever is broken, it stays broken—which hasn’t yet happened.

More than most sciences, astronomy depends on the sense of sight; before astronomers can reimagine the universe as a whole, they first have to figure out how to perceive the dark parts. Knowing what dark matter is would help scientists think about how the structure of the universe forms. Knowing what dark energy does would help scientists think about how that structure has evolved over time—and how it will continue to evolve.

Scientists have a couple of candidates for the composition of dark matter—hypothetical particles called neutralinos and axions. For dark energy, however, the challenge is to figure out not what it is but what it’s like. In particular, astronomers want to know if dark energy changes over space and time, or whether it’s constant. One way to study it is to measure so-called baryon acoustic oscillations. When the universe was still in its infancy, a mere 379,000 years old, it cooled sufficiently for baryons (particles made from protons and neutrons) to separate from photons (packets of light). This separation left behind an imprint—called the cosmic microwave background—that can still be detected today. It includes sound waves (“acoustic oscillations”) that coursed through the infant universe. The peaks of those oscillations represent regions that were slightly denser than the rest of the universe. And because matter attracts matter through gravity, those regions grew even denser as the universe aged, coalescing first into galaxies and then into clusters of galaxies. If astronomers compare the original cosmic microwave background oscillations with the distribution of galaxies at different stages of the universe’s history, they can measure the rate of the universe’s expansion.
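The expansion that separation implies can be checked with back-of-the-envelope arithmetic: the cosmic microwave background's temperature scales inversely with the size of the universe, so comparing the roughly 3,000 K temperature at which photons decoupled (an assumed textbook value, not a figure from the article) with today's measured 2.725 K gives the factor by which the universe has stretched since then.

```python
T_decoupling = 3000.0   # K, approximate temperature at photon decoupling (assumed textbook value)
T_today = 2.725         # K, measured CMB temperature today

# Temperature scales as 1/a, so the stretch factor (1 + redshift) is just the ratio
stretch = T_decoupling / T_today
print(f"The universe has stretched by a factor of about {stretch:.0f} since the CMB formed")
```

The result, about 1,100, is close to the standard quoted redshift of the CMB, z ≈ 1090.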

Another approach to defining dark energy involves a method called gravitational lensing. According to Albert Einstein’s theory of general relativity, a beam of light traveling through space appears to bend because of the gravitational pull of matter. (Actually, it’s space itself that bends, and light just goes along for the ride.) If two clusters of galaxies lie along a single line of sight, the foreground cluster will act as a lens that distorts light coming from the background cluster. This distortion can tell astronomers the mass of the foreground cluster. By sampling millions of galaxies in different parts of the universe, astronomers should be able to estimate the rate at which galaxies have clumped into clusters over time, and that rate in turn will tell them how fast the universe expanded at different points in its history.
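The bending the article describes can be estimated from general relativity's light-deflection formula, α = 4GM/(c²b), for light passing a mass M at impact parameter b. A quick sketch for light grazing the Sun, using standard constants (the solar example is mine, not the article's):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.96e8       # solar radius, m (impact parameter for a grazing ray)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)   # deflection angle in radians
alpha_arcsec = alpha_rad * 206265            # radians -> arcseconds

print(f"{alpha_arcsec:.2f} arcseconds")      # ~1.75", the value Eddington's 1919 expedition tested
```

Cluster lenses involve far larger masses and distances, but the same formula, applied statistically across millions of background galaxies, is what lets astronomers weigh the foreground matter.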

The South Pole Telescope uses a third technique, called the Sunyaev-Zel’dovich effect, named for two Soviet physicists, which draws on the cosmic microwave background. If a photon from the latter interacts with hot gas in a cluster, it experiences a slight increase in energy. Detecting this energy allows astronomers to map those clusters and measure the influence of dark energy on their growth throughout the history of the universe. That, at least, is the hope. “A lot of people in the community have developed what I think is a healthy skepticism. They say, ‘That’s great, but show us the money,’” says Holzapfel. “And I think within a year or two, we’ll be in a position to be able to do that.”

The SPT team focuses on galaxy clusters because they are the largest structures in the universe, often consisting of hundreds of galaxies whose combined mass is a million billion times that of the Sun. As dark energy pushes the universe to expand, galaxy clusters will have a harder time growing. They will become more distant from one another, and the universe will become colder and lonelier.
Galaxy clusters “are sort of like canaries in a coal mine in terms of structure formation,” Holzapfel says. If the density of dark matter or the properties of dark energy were to change, the abundance of clusters “would be the first thing to be altered.” The South Pole Telescope should be able to track galaxy clusters over time. “You can say, ‘At so many billion years ago, how many clusters were there, and how many are there now?’” says Holzapfel. “And then compare them to your predictions.”

Yet all these methods come with a caveat. They assume that we sufficiently understand gravity, which is not only the force opposing dark energy but has been the very foundation of physics for the past four centuries.

Twenty times a second, a laser high in the Sacramento Mountains of New Mexico aims a pulse of light at the Moon, 239,000 miles away. The beam’s target is one of three suitcase-size reflectors that Apollo astronauts planted on the lunar surface four decades ago. Photons from the beam bounce off the mirror and return to New Mexico. Total round-trip travel time: 2.5 seconds, more or less.

That “more or less” makes all the difference. By timing the speed-of-light journey, researchers at the Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) can measure the Earth-Moon distance moment to moment and map the Moon’s orbit with exquisite precision. As in the apocryphal story of Galileo dropping balls from the Leaning Tower of Pisa to test the universality of free fall, APOLLO treats the Earth and Moon like two balls dropping in the gravitational field of the Sun. Mario Livio, an astrophysicist at the Space Telescope Science Institute in Baltimore, calls it an “absolutely incredible experiment.” If the orbit of the Moon exhibits even the slightest deviation from Einstein’s predictions, scientists might have to rethink his equations—and perhaps even the existence of dark matter and dark energy.
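The "more or less" in the round-trip figure is easy to reproduce: at the speed of light, the quoted 239,000-mile distance fixes the pulse's travel time directly.

```python
c = 299_792_458            # speed of light, m/s (exact by definition)
distance_miles = 239_000   # quoted average Earth-Moon distance
distance_m = distance_miles * 1609.344   # miles -> meters

round_trip = 2 * distance_m / c
print(f"Round trip: {round_trip:.3f} s")  # about 2.57 s: "2.5 seconds, more or less"
```

APOLLO's precision lies entirely in that third decimal place and beyond: millimeter-level ranging means timing the echo to a few picoseconds.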

“So far, Einstein is holding,” says one of APOLLO’s lead observers, astronomer Russet McMillan, as her five-year project passes the halfway point.

Even if Einstein weren’t holding, researchers would first have to eliminate other possibilities, such as an error in the measure of the mass of the Earth, Moon or Sun, before conceding that general relativity requires a corrective. Even so, astronomers know that they take gravity for granted at their own peril. They have inferred the existence of dark matter due to its gravitational effects on galaxies, and the existence of dark energy due to its anti-gravitational effects on the expansion of the universe. What if the assumption underlying these twin inferences—that we know how gravity works—is wrong? Can a theory of the universe even more outlandish than one positing dark matter and dark energy account for the evidence? To find out, scientists are testing gravity not only across the universe but across the tabletop. Until recently, physicists hadn’t measured gravity at extremely close ranges.

“Astonishing, isn’t it?” says Eric Adelberger, the coordinator of several gravity experiments taking place in a laboratory at the University of Washington, Seattle. “But it wouldn’t be astonishing if you tried to do it”—if you tried to test gravity at distances shorter than a millimeter. Testing gravity isn’t simply a matter of putting two objects close to each other and measuring the attraction between them. All sorts of other things may be exerting a gravitational influence.

“There’s metal here,” Adelberger says, pointing to a nearby instrument. “There’s a hillside over here”—waving toward some point past the concrete wall that encircles the laboratory. “There’s a lake over there.” There’s also the groundwater level in the soil, which changes every time it rains. Then there’s the rotation of the Earth, the position of the Sun, the dark matter at the heart of our galaxy.

Over the past decade the Seattle team has measured the gravitational attraction between two objects at smaller and smaller distances, down to 56 microns (or 1/500 of an inch), just to make sure that Einstein’s equations for gravity hold true at the shortest distances, too. So far, they do.

But even Einstein recognized that his theory of general relativity didn’t entirely explain the universe. He spent the last 30 years of his life trying to reconcile his physics of the very big with the physics of the very small—quantum mechanics. He failed.

Theorists have come up with all sorts of possibilities in an attempt to reconcile general relativity with quantum mechanics: parallel universes, colliding universes, bubble universes, universes with extra dimensions, universes that eternally reproduce, universes that bounce from Big Bang to Big Crunch to Big Bang.

Adam Riess, an astronomer who collaborated with Brian Schmidt on the discovery of dark energy, says he looks every day at an Internet site (xxx.lanl.gov/archive/astro-ph) where scientists post their analyses to see what new ideas are out there. “Most of them are pretty kooky,” he says. “But it’s possible that somebody will come out with a deep theory.”

For all its advances, astronomy turns out to have been laboring under an incorrect, if reasonable, assumption: what you see is what you get. Now astronomers have to adapt to the idea that the universe is not the stuff of us—in the grand scheme of things, our species and our planet and our galaxy and everything we have ever seen are, as theoretical physicist Lawrence Krauss of Arizona State University has said, “a bit of pollution.”

Yet cosmologists tend not to be discouraged. “The really hard problems are great,” says Michael Turner, “because we know they’ll require a crazy new idea.” As Andreas Albrecht, a cosmologist at the University of California at Davis, said at a recent conference on dark energy: “If you put the timeline of the history of science before me and I could choose any time and field, this is where I’d want to be.”

Richard Panek wrote about Einstein for Smithsonian in 2005. His book on dark matter and dark energy will appear in 2011.
 

 

Is Quantum Intuition Possible?

Quantum physics defies our physical intuition about how the world is supposed to work. In the quantum world, objects resist such classical banalities as “position” and “speed,” particles are waves and waves are particles, and the act of observing seems to change the system being observed.
But what if we could develop a “quantum intuition” that would make this all seem as natural as an apple falling from a tree?

[Image: baby reading a book. Credit: Eliza Sankar Gorton. Baby and book by Evil Sivan/Flickr, Calabi-Yau manifold by Lunch/Wikipedia, adapted under a Creative Commons license.]
Physical intuition starts developing early, long before we ever encounter Newton’s laws on a blackboard. “Babies have a few skeletal principles that are built in to the brain and help them reason about and predict how objects should act and interact in the world,” says Kristy vanMarle, an infant cognition researcher at the University of Missouri. They understand, for instance, that objects can’t pass through each other, a notion that’s at odds with a quantum effect called tunneling, which allows objects to slip through barriers that, in the classical world, would be impenetrable. Presented with demonstrations in which objects appear to materialize inside closed boxes and pass through solid walls, babies consistently stare longer at these “magic” shows than they do at demos in which boxes act like boxes. Psychologists Susan Hespos (now at Northwestern University) and Renee Baillargeon (University of Illinois) found that this physical intuition kicks in as early as two and a half months, and vanMarle and her colleagues think that it is probably present from birth.

Babies also intuitively grasp that objects exist even when you’re not looking at them, a concept called “object permanence” that goes against the classic Copenhagen interpretation of quantum mechanics, in which an object can’t be said to have any definite properties until the moment at which it is observed. Since Jean Piaget first pegged object permanence as a milestone in infant development, psychology researchers have found evidence that ever-younger babies have some sense of it; affirming object permanence seems to be the main theme of peek-a-boo. (To someone who has truly taken quantum physics to heart, perhaps peek-a-boo never gets old.)

These innate notions, plus “elaborations” born from watching and interacting with the world, add up to a sort of “naïve physics” that we all grasp without any formal physics training, says vanMarle.
But what about building quantum intuition after that early mental groundwork has already been laid? Most students don’t begin studying quantum physics until college, when they already have both an intuitive and a formal, or mathematical, toolkit for classical physics.

Some college educators maintain that students should just stick to the math and forget trying to establish a “gut feeling” for quantum mechanics; I’ve argued the same about similarly difficult concepts in cosmology. No less an authority than Max Born, who received the 1954 Nobel Prize for his contributions to the foundation of quantum mechanics, felt that our minds just weren’t up to the task of “intuiting” quantum physics. As he wrote in “Atomic Physics,” first published in English in 1935, “The ultimate origin of the difficulty lies in the fact (or philosophical principle) that we are compelled to use the words of common language when we wish to describe a phenomenon, not by logical or mathematical analysis, but by a picture appealing to the imagination. Common language has grown by everyday experience and can never surpass these limits.”

Lord Kelvin took a similar tack, points out Daniel Styer, a professor of physics at Oberlin College and the author of “The Strange World of Quantum Mechanics.” In his “Baltimore lectures,” a series of talks delivered in 1884 at Johns Hopkins University, Kelvin said, “It seems to me that the test of ‘Do we or not understand a particular subject in physics?’ is, ‘Can we make a mechanical model of it?’” By that yardstick, says Styer, all efforts to understand quantum mechanics are doomed to fail.
“The experimental tests of Bell’s inequality prove that no mechanical model, regardless of how intricate or fanciful or baroque, will ever be able to reproduce all the results of quantum mechanics.”
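Styer's point can be made concrete. For two entangled spins in the singlet state, quantum mechanics predicts a correlation E(a, b) = −cos(a − b) between measurements along directions a and b, and the CHSH combination of four such correlations reaches 2√2 ≈ 2.83, while any local "mechanical model" is capped at 2. A sketch of that standard textbook calculation (the angle choices are the usual optimal ones, not from this article):

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a and b."""
    return -math.cos(a - b)

# The standard CHSH measurement angles
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828, beyond the classical limit of 2
```

Experiments measure |S| above 2, which is exactly what rules out the mechanical models Kelvin had in mind.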
But Kelvin wasn’t talking about quantum mechanics; he was struggling to grasp the theory of electromagnetism. Quantum mechanics doesn’t have a monopoly on mind-blowing, after all; physicists have been upending intuition for thousands of years. “When I teach freshman physics, the thing that’s hard is not that the students are ignorant. It’s that they already know the answer—and it’s wrong,” says Steve Girvin, a physicist at Yale University. Newton’s first law claims (roughly) that objects in motion tend to stay in motion, but tell that to the guy trying to push a moving box full of books across the floor. Our “naïve physics” is actually closest to Aristotle’s 2,300-year-old theories, in which heavy objects fall faster than light ones and objects in motion ease to a stop unless you keep pushing them. Quantum mechanics may seem weird, but to Aristotle, Newton’s laws would have been just as head-spinning.

To get from Aristotle to Newton, you have to be able to imagine a world without friction. Luckily, that isn’t so hard; if you’ve ever played air hockey or laced up ice skates, you can vouch for Newton’s first law.

But what is the quantum equivalent of an air hockey table–an everyday object that provides us hands-on access to quantum physics? If there is one, I haven’t thought of it. Computer simulations may provide the next best thing, and physics educators like Kansas State University’s Dean Zollman are actively developing and testing new software that puts students into a (virtual) quantum world where they can actively manipulate the parameters of quantum systems and see how their tweaks play out.
“It’s certainly easier to be a student of quantum mechanics now than it was when I went through school,” says Zollman. “We had drawings in books, but visualizing things, even twenty-five years ago, was not the way most people went about teaching. And still images are still images—they don’t give you the same feeling, the same kind of understanding, that we really can do these days.”

Perhaps all of this should give us fresh respect for the scientists who discovered and codified the rules of quantum physics. “It was just incredibly difficult for classical physicists to make the leap from that worldview, which was confirmed by the things they saw in the everyday world around them, to understanding the strange implications of quantum mechanics,” says Girvin. “Every student today—90-100 years later—still has to make that same leap.” Each individual who aims to learn modern physics must personally recapitulate thousands of years of discovery.

And at the end of that road? “Practicing, professional people who have been doing this for decades still have arguments about what the results of the experiments will be,” says Girvin. There are no “native speakers” of quantum mechanics. “What is to be done about this?” asks Styer. “There are only two choices. We can either give up on understanding, or we can develop a new and more appropriate meaning for ‘understanding.’ I advocate the second choice.”

“Our minds evolved to find food and to avoid being eaten,” says Styer. “The fact that our minds ‘overevolved’ and allow us also to find beauty in sunsets and mountains, waterfalls and people; allow us to laugh and to love and to learn; allow us to explore unknown continents, and outer space, and (most bizarre of all) the atomic world, is a gift that we neither deserve nor (in many cases) appreciate. That we can make any progress at all in understanding quantum mechanics is surprising. We must not berate ourselves because our progress is imperfect. Instead, we must continue poking around, in joy and in wonder and sometimes in pain, exploring and building intuition concerning this strange and beautiful atomic world.”

Uncertainty principle

 
In quantum mechanics, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle known as complementary variables, such as position x and momentum p, can be known simultaneously. For instance, in 1927, Werner Heisenberg stated that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[1] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[2] later that year and by Hermann Weyl[3] in 1928:
σx σp ≥ ħ/2
(ħ is the reduced Planck constant).
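The inequality can be checked numerically. A Gaussian wave packet is the state that saturates it, with σx σp exactly ħ/2. Here is a sketch in natural units (ħ = 1), using plain Riemann sums for the integrals:

```python
import numpy as np

hbar = 1.0
sigma = 0.7                      # width parameter of the packet (arbitrary choice)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
prob = psi**2                    # |psi|^2; psi is real here

mean_x = np.sum(x * prob) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

# For a real psi, <p> = 0 and <p^2> = hbar^2 * integral of (dpsi/dx)^2
dpsi = np.gradient(psi, dx)
sigma_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(sigma_x * sigma_p)         # ~ 0.5 = hbar/2: the Gaussian saturates the bound
```

Any non-Gaussian packet you substitute for psi will give a product strictly greater than ħ/2, which is the inequality at work.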

The original heuristic argument that such a limit should exist was given by Heisenberg, after whom it is sometimes named the Heisenberg principle. This ascribes the uncertainty in the measurable quantities to the jolt-like disturbance triggered by the act of observation. Though widely repeated in textbooks, this physical argument is now known to be fundamentally misleading.[4][5] While the act of measurement does lead to uncertainty, the loss of precision is less than that predicted by Heisenberg's argument; the formal mathematical result remains valid, however.

Historically, the uncertainty principle has been confused[6][7] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[8] It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[4] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10]

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number-phase uncertainty relations in superconducting[11] or quantum optics[12] systems. Applications dependent on the uncertainty principle for their operation include extremely low noise technology such as that required in gravitational-wave interferometers.[13]

Tuesday, July 29, 2014

House Republicans Pass Bill to Lower Taxes on the Rich and Raise Taxes on the Poor

By
Mon Jul. 28, 2014 2:12 PM EDT
 
So what are Republicans in the House of Representatives up to these days? According to Danny Vinik, they just passed a bill that would reduce taxes on the rich and raise them on the poor.
I know, I know: you're shocked. But in a way, I think this whole episode is even worse than Vinik makes it sound.

Here's the background: The child tax credit reduces your income tax by $1,000 for each child you have. It phases out for upper middle-income folks, but—and this is the key point—it phases out differently for singles and couples. The way the numbers sort out, it treats singles better than couples. This is the dreaded "marriage penalty," which is bad because we want to encourage people to get married, not discourage them.

So what did House Republicans do? Naturally, they raised the phase-out threshold for married couples so that well-off couples would get a higher benefit. They didn't have to do this, of course. They could have lowered the benefit for singles instead. Or they could have jiggled the numbers so that everyone got equal benefits but the overall result was revenue neutral.
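To see how the phase-out mechanics drive the result, here is a toy model of the credit. The $50-per-$1,000 phase-out rate and the $110,000/$150,000 married thresholds are my assumptions about then-current law and the House bill, and the model ignores the bill's inflation indexing, so the numbers are illustrative only:

```python
import math

def child_tax_credit(agi, kids, threshold, credit_per_child=1000):
    """Toy CTC: the credit shrinks $50 per $1,000 (or fraction) of income over the threshold."""
    excess = max(0, agi - threshold)
    reduction = 50 * math.ceil(excess / 1000)
    return max(0, credit_per_child * kids - reduction)

# A married couple with two kids and $160,000 of income
before = child_tax_credit(160_000, 2, threshold=110_000)  # assumed pre-bill married threshold
after = child_tax_credit(160_000, 2, threshold=150_000)   # assumed raised threshold
print(before, after)
```

In this toy version the couple goes from no credit to $1,500, the same ballpark as the CBPP's $2,200 estimate once indexing and other provisions are folded in.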

But they didn't. They chose the path that would increase the benefit—and thus lower taxes—for married couples making high incomes. The bill also indexes the credit to inflation, which helps only those with incomes high enough to claim the full credit. And it does nothing to make permanent a reduction in the earnings threshold that benefits poor working families. Here's the net result:
If the House legislation became law, the Center for Budget and Policy Priorities estimated that a couple making $160,000 a year would receive a new tax cut of $2,200. On the other hand, the expiring provisions of the CTC would cause a single mother with two kids making $14,500 to lose her full CTC, worth $1,725.
So inflation indexing, which is verboten when the subject is the minimum wage, is A-OK when it comes to high-income taxpayers. And eliminating the marriage penalty is also a good idea—but again, only for high-income couples.
Which is crazy. I don't really have a firm opinion on whether the government should be in the business of encouraging marriage, but if it is, surely it should focus its attention on the people who need encouragement in the first place. And that is very decidedly not the upper middle class, which continues to get married at the same rate as ever.

So we have a deficit-busting tax cut. It's a cut only for the upper middle class. It's indexed for inflation, even though we're not allowed to index things like the minimum wage. And the poor are still scheduled for a tax increase in 2017 because this bill does nothing to stop it. It's a real quad-fecta. I wonder what Paul Ryan thinks of all this?

Bayesian inference

From Wikipedia, the free encyclopedia: https://en.wikipedia.org/wiki/Bayesian_inference