
Biodiversity action plan

From Wikipedia, the free encyclopedia

A biodiversity action plan (BAP) is an internationally recognized program addressing threatened species and habitats and is designed to protect and restore biological systems. The original impetus for these plans derives from the 1992 Convention on Biological Diversity (CBD). As of 2009, 191 countries had ratified the CBD, but only a fraction of them had developed substantive BAP documents.

The principal elements of a BAP typically include: (a) preparing inventories of biological information for selected species or habitats; (b) assessing the conservation status of species within specified ecosystems; (c) creation of targets for conservation and restoration; and (d) establishing budgets, timelines and institutional partnerships for implementing the BAP.

Species plans

Snow leopard, Pakistan, an endangered species

A fundamental component of a BAP is thorough documentation of individual species, with emphasis on their population distribution and conservation status. This task, while fundamental, is daunting: only an estimated ten percent of the world's species are believed to have been characterized as of 2006, and most of the undescribed species are fungi, invertebrate animals, micro-organisms and plants. For many bird, mammal and reptile species, information is often available in published literature; however, for fungi, invertebrate animals, micro-organisms and many plants, such information may require considerable local data collection. It is also useful to compile time trends of population estimates in order to understand the dynamics of population variability and vulnerability. In some parts of the world complete species inventories are not realistic; for example, in the Madagascar dry deciduous forests, many species are completely undocumented and much of the region has never even been systematically explored by scientists.

A species plan component of a country’s BAP should ideally entail a thorough description of the range, habitat, behaviour, breeding and interaction with other species. Once a determination has been made of conservation status (e.g. rare, endangered, threatened, vulnerable), a plan can then be created to conserve and restore the species population to target levels. Examples of programmatic protection elements are: habitat restoration; protection of habitat from urban development; establishment of property ownership; limitations on grazing or other agricultural encroachment into habitat; reduction of slash-and-burn agricultural practices; outlawing killing or collecting the species; restrictions on pesticide use; and control of other environmental pollution. The plan should also articulate which public and private agencies should implement the protection strategy and indicate budgets available to execute this strategy.

Agricultural plans

Agricultural practices can significantly reduce the biodiversity of a region. Biodiversity Action Plans for agricultural production are necessary to ensure biodiversity-friendly production. It has not been common for companies to integrate biodiversity aspects into their value chains, but some companies and organizations have made efforts to implement better practices.

An existing example of guidelines on biodiversity practices in agriculture is the Biodiversity Action Plan for spice production in India. By planning and implementing biodiversity-friendly measures, farmers can mitigate negative impacts and support positive influences.

Habitat plans

Where a number of threatened species depend upon a specific habitat, it may be appropriate to prepare a habitat protection element of the Biodiversity Action Plan. Examples of such special habitats are: raised acidic bogs of Scotland; Waterberg Biosphere bushveld in South Africa; California’s coastal wetlands; and Sweden’s Stora Alvaret on the island of Öland. In this case also, careful inventories of species and also the geographic extent and quality of the habitat must be documented. Then, as with species plans, a program can be created to protect, enhance and/or restore habitat using similar strategies as discussed above under the species plans.

Specific countries

Some examples of individual countries which have produced substantive Biodiversity Action Plans follow. In every example the plans concentrate on plants and vertebrate animals, with very little attention to neglected groups such as fungi, invertebrate animals and micro-organisms, even though these are also part of biodiversity. Preparation of a country BAP may cost up to 100 million pounds sterling, with annual maintenance costs roughly ten percent of the initial cost. If plans took into account neglected groups, the cost would be higher. Costs for countries with a small geographical area or simplified ecosystems are, of course, much lower; for example, the St. Lucia BAP has been costed at several million pounds sterling.

Australia

Australia has developed a detailed and rigorous Biodiversity Action Plan. This document estimates that the total number of indigenous species may be 560,000, many of which are endemic. A key element of the BAP is protection of the Great Barrier Reef, which is in a much better state of health than most of the world's reefs, Australia having one of the highest percentages of treated wastewater. There are, however, serious ongoing concerns, particularly regarding the negative impact of land use practices on water quality. The impact of climate change is also feared to be significant.

Considerable analysis has been conducted on the sustainable yield of firewood production, a major driver of deforestation in most tropical countries. Biological inventory work, assessment of harvesting practices, and computer modeling of the dynamics of treefall, rot and harvest have been carried out to adduce data on safe harvesting rates. Extensive research has also been conducted on the relation of brush clearance to biodiversity decline and its impact on water tables; these effects have been analyzed, for example, in the Toolibin Lake wetlands region.

New Zealand

New Zealand has ratified the Convention on Biological Diversity, and as part of the New Zealand Biodiversity Strategy, Biodiversity Action Plans are implemented on ten separate themes.

Local governments and some companies also have their own Biodiversity Action Plans.

St. Lucia

The St. Lucia BAP recognizes the impact of large numbers of tourists on the marine and coastal diversity of the Soufrière area of the country. The BAP specifically acknowledges that the carrying capacity of sensitive reef areas for human use and water pollution discharge was exceeded by the year 1990. The plan also addresses conservation of the historic island fishing industry. In 1992, several institutions worked in conjunction with native fishermen to produce a sustainable management plan for fishery resources, embodied in the Soufrière Marine Management Area.

The St. Lucia BAP features significant involvement from the University of the West Indies. Specific detailed attention is given to three species of threatened marine turtles, to a variety of vulnerable birds and a number of pelagic fishes and cetaceans. In terms of habitat conservation the plan focusses attention on the biologically productive mangrove swamps and notes that virtually all mangrove areas had already come under national protection by 1984.

Tanzania

The Tanzania national BAP addresses issues related to sustainable use of Lake Manyara, an extensive freshwater lake, whose usage by humans accelerated in the period 1950 to 1990. The designation of the Lake Manyara Biosphere Reserve under UNESCO's Man and the Biosphere Programme in 1981 combines conservation of the lake and surrounding high value forests with sustainable use of the wetlands area and simple agriculture. This BAP has united principal lake users in establishing management targets. The biosphere reserve has induced sustainable management of the wetlands, including monitoring groundwater and the chemistry of the escarpment water source.

United Kingdom

Fowlsheugh cliffs, Scotland, a protected seabird breeding habitat

The United Kingdom Biodiversity Action Plan covers not only terrestrial species associated with lands within the UK, but also marine species and migratory birds, which spend a limited time in the UK or its offshore waters. The UK plan encompasses "391 Species Action Plans, 45 Habitat Action Plans and 162 Local Biodiversity Action Plans with targeted actions". This plan is noteworthy because of its extensive detail, clarity of endangerment mechanisms, specificity of actions, follow up monitoring program and its inclusion of migrating cetaceans and pelagic birds.

On August 28, 2007, the new Biodiversity Action Plan (BAP), updating the one launched in 1997, identified 1,149 species and 65 habitats in the UK that needed conservation and greater protection. The updated list included the hedgehog, house sparrow, grass snake and the garden tiger moth, while otters, bottlenose dolphins and red squirrels remained in need of habitat protection.

In May 2011, the European Commission adopted a new strategy to halt the loss of biodiversity and ecosystem services in the EU by 2020, in line with the commitments made at the 10th meeting of the Convention on Biological Diversity (CBD) held in Nagoya, Japan in 2010. In 2012 the UK BAP was succeeded by the 'UK Post-2010 Biodiversity Framework'.

UK BAP website

To support the work of the UK BAP, the UK BAP website was created by JNCC in 2001. The website contained information on the BAP process, hosted all relevant documents, and provided news and relevant updates. In March 2011, as part of the UK government’s review of websites, the UK BAP site was ‘closed’, and the core content was migrated into the JNCC website. Content from the original UK BAP website has been archived by the National Archives as snapshots from various dates (for example, UK BAP: copy March 2011; copy 2012).

United States

Twenty-six years prior to the international biodiversity convention, the United States had launched a national program to protect threatened species in the form of the 1966 Endangered Species Act. The legislation created broad authority for analyzing and listing species of concern, and mandated that Species Recovery Plans be created. Thus, while the USA has signed but not ratified the convention, it arguably has the longest track record and most comprehensive program of species protection of any country. There are about 7,000 listed species (i.e. classified as endangered or threatened), of which about half have approved Recovery Plans. While this number of species seems high compared to other countries, it largely reflects the very large total number of species that have been characterized there.

Uzbekistan

Five major divisions of habitat have been identified in Uzbekistan’s BAP: Wetlands (including reed habitat and man-made marsh); desert ecosystems (including sandy, stony and clay); steppes; riparian ecosystems; and mountain ecosystems. Over 27,000 species have been inventoried in the country, with a high rate of endemism for fishes and reptiles. Principal threats to biodiversity are related to human activities associated with overpopulation and generally related to agricultural intensification. Major geographic regions encompassed by the BAP include the Aral Sea Programme (threatened by long-term drainage and salination, largely for cotton production), the Nuratau Biosphere Reserve, and the Western Tien Shan Mountains Programme (in conjunction with Kazakhstan and Kyrgyzstan).

Criticism

Some developing countries criticize the emphasis of BAPs because these plans inherently favour wildlife protection over food and industrial production, and in some cases may represent an obstacle to population growth. The plans are costly to produce, which makes it difficult for many smaller and poorer countries to comply. In terms of the plans themselves, many countries have adopted pro-forma plans involving little research and even less in the way of natural resource management. Almost universally, this has resulted in plans which emphasize plants and vertebrate animals, and which overlook fungi, invertebrate animals and micro-organisms. With regard to specific world regions, there is a notable lack of substantive participation by most of the Middle Eastern countries and much of Africa, the latter of which may be impeded by the economic costs of plan preparation. Some governments, such as the European Union, have diverted the purpose of a biodiversity action plan, implementing the convention through a set of economic development policies that reference the protection of certain ecosystems.

Biodiversity planning: a new way of thinking

The definition of biodiversity under the Convention on Biological Diversity now recognises that biodiversity is a combination of ecosystem structure and function, as much as its components e.g. species, habitats and genetic resources. Article 2 states:

in addressing the boundless complexity of biological diversity, it has become conventional to think in hierarchical terms, from the genetic material within individual cells, building up through individual organisms, populations, species and communities of species, to the biosphere overall...At the same time, in seeking to make management intervention as efficient as possible, it is essential to take an holistic view of biodiversity and address the interactions that species have with each other and their non-living environment, i.e. to work from an ecological perspective.

The World Summit on Sustainable Development endorsed the objectives of the Convention on Biological Diversity to “achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of life on Earth”. To achieve this outcome, biodiversity management will depend on maintaining structure and function.

Biodiversity is not singularly definable but may be understood via a series of management principles under BAPs, such as:

1. Biodiversity is conserved across all levels and scales – structure, function and composition are conserved at site, regional, state and national scales.
2. Examples of all ecological communities are adequately managed for conservation.
3. Ecological communities are managed to support and enhance viable populations of animals, fungi, micro-organisms and plants, and ecological functions.

Biodiversity and wildlife are not the same thing. The traditional focus on threatened species in BAPs is at odds with the principles of biodiversity management because, by the time species become threatened, the processes that maintain biodiversity are already compromised. Individual species are also generally regarded as poor indicators of biodiversity when it comes to actual planning. A species approach to BAPs serves only to identify, and at best patch over, existing problems. Increasingly, biodiversity planners are looking through the lens of ecosystem services. Critics of biodiversity planning often confuse the need to protect species (their intrinsic value) with the need to maintain ecosystem processes, which ultimately maintain human society and do not compromise economic development. Hence, a core principle of biodiversity management that traditional BAPs overlook is the need to incorporate cultural, social and economic values in the process.

Modern-day BAPs use an analysis of ecosystem services and key ecological process drivers, and use species as one of many indicators of change. They seek to maintain structure and function by addressing habitat connectivity and resilience, and may look at communities of species (threatened or otherwise) as one method of monitoring outcomes. Ultimately, species are the litmus test for biodiversity – viable populations of species can only be expected to exist in relatively intact habitats. However, the rationale behind BAPs is to "conserve and restore" biodiversity. One of the fastest developing areas of management is biodiversity offsets. The principles are in keeping with ecological impact assessment, which in turn depends on good quality BAPs for evaluation. Contemporary principles of biodiversity management, such as those produced by the Business and Biodiversity Offsets Programme, are now integral to any plans to manage biodiversity, including the development of BAPs.

Bird flight

From Wikipedia, the free encyclopedia
 
A flock of domestic pigeons each in a different phase of its flap.

Bird flight is the primary mode of locomotion used by most bird species. Flight assists birds with feeding, breeding, avoiding predators, and migrating.

Bird flight is one of the most complex forms of locomotion in the animal kingdom. Each facet of this type of motion, including hovering, taking off, and landing, involves many complex movements. As different bird species adapted over millions of years through evolution for specific environments, prey, predators, and other needs, they developed specializations in their wings, and acquired different forms of flight.

Various theories exist about how bird flight evolved, including flight from falling or gliding (the trees down hypothesis), from running or leaping (the ground up hypothesis), from wing-assisted incline running or from proavis (pouncing) behavior.

Basic mechanics of bird flight

Lift, drag and thrust

The fundamentals of bird flight are similar to those of aircraft, in which the aerodynamic forces sustaining flight are lift, drag, and thrust. Lift force is produced by the action of air flow on the wing, which is an airfoil. The airfoil is shaped such that the air provides a net upward force on the wing, while the movement of air is directed downward. Additional net lift may come from airflow around the bird's body in some species, especially during intermittent flight while the wings are folded or semi-folded (cf. lifting body).

Aerodynamic drag is the force opposite to the direction of motion, and hence the source of energy loss in flight. The drag force can be separated into two portions: lift-induced drag, which is the inherent cost of the wing producing lift (this energy ends up primarily in the wingtip vortices), and parasitic drag, which includes skin friction drag from the friction between the air and the body surfaces and form drag from the bird's frontal area. The streamlining of the bird's body and wings reduces these forces. Unlike aircraft, which have engines to produce thrust, birds flap their wings with a given flapping amplitude and frequency to generate thrust.
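To make these relations concrete, here is a minimal Python sketch of the standard lift equation and of the split of drag into induced and parasitic parts for steady, level flight. All of the numbers (speed, wing area, coefficients, span efficiency) are illustrative assumptions for a roughly pigeon-sized bird, not values taken from the article; in level flight the thrust produced by flapping must balance the total drag, while lift balances weight.

```python
import math

# A minimal sketch of the standard lift/drag relations described above.
# Every numerical value below is an illustrative assumption, not a figure
# from the article.

rho = 1.225        # air density at sea level, kg/m^3
v = 12.0           # flight speed, m/s (assumed)
wing_area = 0.10   # total wing area, m^2 (assumed)
wingspan = 0.70    # wingspan, m (assumed)
CL = 0.8           # lift coefficient (assumed)
CD0 = 0.03         # parasitic drag coefficient: skin friction + form drag (assumed)
e = 0.9            # span efficiency factor (assumed)

q = 0.5 * rho * v**2                        # dynamic pressure, Pa
lift = q * wing_area * CL                   # lift force, N
aspect_ratio = wingspan**2 / wing_area
CDi = CL**2 / (math.pi * e * aspect_ratio)  # lift-induced drag coefficient
induced_drag = q * wing_area * CDi
parasitic_drag = q * wing_area * CD0
total_drag = induced_drag + parasitic_drag  # thrust from flapping must balance this

print(f"lift = {lift:.2f} N (supports about {lift / 9.81:.2f} kg)")
print(f"drag = {total_drag:.2f} N (induced {induced_drag:.2f} N, parasitic {parasitic_drag:.2f} N)")
```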

Flight

Birds use mainly three types of flight, distinguished by wing motion.

Gliding flight

Lesser flamingos flying in formation.

When in gliding flight, the upward aerodynamic force is equal to the weight. In gliding flight, no propulsion is used; the energy needed to counteract the loss due to aerodynamic drag is either taken from the potential energy of the bird, resulting in a descending flight, or is replaced by rising air currents ("thermals"), in which case the flight is referred to as soaring. For specialist soaring birds (obligate soarers), the decision to engage in flight is strongly related to atmospheric conditions that allow individuals to maximise flight efficiency and minimise energetic costs.

Flapping flight

When a bird flaps, as opposed to gliding, its wings continue to develop lift as before, but the lift is rotated forward to provide thrust, which counteracts drag and increases its speed, which has the effect of also increasing lift to counteract its weight, allowing it to maintain height or to climb. Flapping involves two stages: the down-stroke, which provides the majority of the thrust, and the up-stroke, which can also (depending on the bird's wings) provide some thrust. At each up-stroke the wing is slightly folded inwards to reduce the energetic cost of flapping-wing flight. Birds change the angle of attack continuously within a flap, as well as with speed.

Bounding flight

Small birds often fly long distances using a technique in which short bursts of flapping are alternated with intervals in which the wings are folded against the body. This is a flight pattern known as "bounding" or "flap-bounding" flight. When the bird's wings are folded, its trajectory is primarily ballistic, with a small amount of body lift. The flight pattern is believed to decrease the energy required by reducing the aerodynamic drag during the ballistic part of the trajectory, and to increase the efficiency of muscle use.

Hovering

The ruby-throated hummingbird can beat its wings 52 times a second.
 
Several bird species use hovering, with one family specialized for hovering – the hummingbirds. True hovering occurs by generating lift through flapping alone, rather than by passage through the air, and requires considerable energy expenditure. This usually confines the ability to smaller birds, but some larger birds, such as kites and ospreys, can hover for a short period of time. Although not a true hover, some birds remain in a fixed position relative to the ground or water by flying into a headwind. Hummingbirds, kestrels, terns and hawks use this wind hovering.

Most birds that hover have high aspect ratio wings that are suited to low speed flying. Hummingbirds are a unique exception – the most accomplished hoverers of all birds. Hummingbird flight is different from other bird flight in that the wing is extended throughout the whole stroke, which traces a symmetrical figure of eight, with the wing producing lift on both the up- and down-stroke. Hummingbirds beat their wings some 43 times per second, while other species may reach as many as 80 times per second.

Take-off and landing

A male bufflehead runs atop the water while taking off.
 
A magpie-goose taking off.
 

Take-off is one of the most energetically demanding aspects of flight, as the bird must generate enough airflow across the wing to create lift. Small birds do this with a simple upward jump. However, this technique does not work for larger birds, such as albatrosses and swans, which instead must take a running start to generate sufficient airflow. Large birds take off by facing into the wind, or, if they can, by perching on a branch or cliff so they can just drop off into the air.

Landing is also a problem for large birds with high wing loads. This problem is dealt with in some species by aiming for a point below the intended landing area (such as a nest on a cliff) then pulling up beforehand. If timed correctly, the airspeed once the target is reached is virtually nil. Landing on water is simpler, and the larger waterfowl species prefer to do so whenever possible, landing into wind and using their feet as skids. To lose height rapidly prior to landing, some large birds such as geese indulge in a rapid alternating series of sideslips or even briefly turn upside down in a maneuver termed whiffling.

Wings

A kea in flight.

The bird's forelimbs (the wings) are the key to flight. Each wing has a central vane to hit the wind, composed of three limb bones, the humerus, ulna and radius. The hand, or manus, which ancestrally was composed of five digits, is reduced to three digits (digit II, III and IV or I, II, III depending on the scheme followed), which serves as an anchor for the primaries, one of two groups of flight feathers responsible for the wing's airfoil shape. The other set of flight feathers, behind the carpal joint on the ulna, are called the secondaries. The remaining feathers on the wing are known as coverts, of which there are three sets. The wing sometimes has vestigial claws. In most species, these are lost by the time the bird is adult (such as the highly visible ones used for active climbing by hoatzin chicks), but claws are retained into adulthood by the secretarybird, screamers, finfoots, ostriches and several swifts, and appear as a local trait in a few specimens of numerous other species.

Albatrosses have locking mechanisms in the wing joints that reduce the strain on the muscles during soaring flight.

Even within a species wing morphology may differ. For example, adult European Turtle Doves have been found to have longer but more rounded wings than juveniles – suggesting that juvenile wing morphology facilitates their first migrations, while selection for flight maneuverability is more important after the juveniles' first molt.

Female birds exposed to predators during ovulation produce chicks that grow their wings faster than chicks produced by predator-free females. Their wings are also longer. Both adaptations may make them better at avoiding avian predators.

Wing shape

Wing shapes

The shape of the wing is important in determining the flight capabilities of a bird. Different shapes correspond to different trade-offs between advantages such as speed, low energy use, and maneuverability. Two important parameters are the aspect ratio and wing loading. Aspect ratio is the ratio of wingspan to the mean of its chord (or the square of the wingspan divided by wing area). A high aspect ratio results in long narrow wings that are useful for endurance flight because they generate more lift. Wing loading is the ratio of weight to wing area.
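As a worked illustration of these two parameters, the short sketch below computes aspect ratio and wing loading for two contrasting hypothetical birds. The span, area and mass figures are rough assumed values, not measurements from the article.

```python
def aspect_ratio(wingspan_m, wing_area_m2):
    """Aspect ratio = wingspan^2 / wing area (equivalently, span / mean chord)."""
    return wingspan_m ** 2 / wing_area_m2

def wing_loading(mass_kg, wing_area_m2, g=9.81):
    """Wing loading = weight / wing area, in N/m^2."""
    return mass_kg * g / wing_area_m2

# Rough, assumed figures for two contrasting birds (not measurements from the article).
birds = {
    "albatross (soaring, high aspect ratio)": {"span": 3.0, "area": 0.60, "mass": 8.5},
    "sparrowhawk (manoeuvring, elliptical)": {"span": 0.75, "area": 0.08, "mass": 0.25},
}

for name, b in birds.items():
    print(f"{name}: aspect ratio = {aspect_ratio(b['span'], b['area']):.1f}, "
          f"wing loading = {wing_loading(b['mass'], b['area']):.0f} N/m^2")
```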

Most kinds of bird wing can be grouped into four types, with some falling between two of these types. These types of wings are elliptical wings, high speed wings, high aspect ratio wings and slotted high-lift wings.

The budgerigar's wings, as seen on this pet female, allow it excellent manoeuvrability.

Elliptical wings

Technically, elliptical wings are those having elliptical planforms (that is, quarter ellipses) meeting conformally at the tips. The early model Supermarine Spitfire is an example. Some birds have vaguely elliptical wings, including the albatross wing of high aspect ratio. Although the term is convenient, it might be more precise to refer to curving taper with fairly small radius at the tips. Many small birds have a low aspect ratio with elliptical character (when spread), allowing for tight maneuvering in confined spaces such as might be found in dense vegetation. As such they are common in forest raptors (such as Accipiter hawks), and many passerines, particularly non-migratory ones (migratory species have longer wings). They are also common in species that use a rapid take off to evade predators, such as pheasants and partridges.

High speed wings

High speed wings are short, pointed wings that when combined with a heavy wing loading and rapid wingbeats provide an energetically expensive high speed. This type of flight is used by the bird with the fastest wing speed, the peregrine falcon, as well as by most of the ducks. Birds that make long migrations typically have this type of wing. The same wing shape is used by the auks for a different purpose; auks use their wings to "fly" underwater.

The peregrine falcon has the highest recorded dive speed of 242 miles per hour (389 km/h). The fastest straight, powered flight is the spine-tailed swift at 105 mph (169 km/h).

A roseate tern uses its low wing loading and high aspect ratio to achieve low speed flight.

High aspect ratio wings

High aspect ratio wings, which usually have low wing loading and are far longer than they are wide, are used for slower flight. This may take the form of almost hovering (as used by kestrels, terns and nightjars) or in soaring and gliding flight, particularly the dynamic soaring used by seabirds, which takes advantage of wind speed variation at different altitudes (wind shear) above ocean waves to provide lift. Low speed flight is also important for birds that plunge-dive for fish.

Soaring wings with deep slots

These wings are favored by larger species of inland birds, such as eagles, vultures, pelicans, and storks. The slots at the end of the wings, between the primaries, reduce the induced drag and wingtip vortices by "capturing" the energy in air flowing from the lower to upper wing surface at the tips, whilst the shorter size of the wings aids in takeoff (high aspect ratio wings require a long taxi to get airborne).

Coordinated formation flight

A wide variety of birds fly together in a symmetric V-shaped or a J-shaped coordinated formation, also referred to as an "echelon", especially during long-distance flight or migration. It is often assumed that birds resort to this pattern of formation flying in order to save energy and improve the aerodynamic efficiency. The birds flying at the tips and at the front would interchange positions in a timely cyclical fashion to spread flight fatigue equally among the flock members.

The wingtips of the leading bird in an echelon create a pair of opposite rotating line vortices. The vortices trailing a bird have an underwash part behind the bird, and at the same time an upwash on the outside, which hypothetically could aid the flight of a trailing bird. In a 1970 study the authors claimed that each bird in a V formation of 25 members can achieve a reduction of induced drag and as a result increase its range by 71%. It has also been suggested that birds' wings produce induced thrust at their tips, allowing for proverse yaw and net upwash at the last quarter of the wing. This would allow birds to overlap their wings and gain Newtonian lift from the bird in front.

Studies of waldrapp ibis show that birds spatially coordinate the phase of wing flapping and show wingtip path coherence when flying in V positions, thus enabling them to maximally utilise the available energy of upwash over the entire flap cycle. In contrast, birds flying in a stream immediately behind another do not have wingtip coherence in their flight pattern and their flapping is out of phase, as compared to birds flying in V patterns, so as to avoid the detrimental effects of the downwash due to the leading bird's flight.

Adaptations for flight

Diagram of the wing of a chicken, top view

The most obvious adaptation to flight is the wing, but because flight is so energetically demanding birds have evolved several other adaptations to improve efficiency when flying. Birds' bodies are streamlined to help overcome air-resistance. Also, the bird skeleton is hollow to reduce weight, and many unnecessary bones have been lost (such as the bony tail of the early bird Archaeopteryx), along with the toothed jaw of early birds, which has been replaced with a lightweight beak. The skeleton's breastbone has also adapted into a large keel, suitable for the attachment of large, powerful flight muscles.

The vanes of each feather have hooklets called barbules that zip the vanes of individual feathers together, giving the feathers the strength needed to hold the airfoil (these are often lost in flightless birds). The barbules maintain the shape and function of the feather. Each feather has a major (greater) side and a minor (lesser) side, meaning that the shaft or rachis does not run down the center of the feather. Rather it runs longitudinally off the center with the lesser or minor side to the front and the greater or major side to the rear of the feather. This feather anatomy, during flight and flapping of the wings, causes a rotation of the feather in its follicle. The rotation occurs in the up motion of the wing. The greater side points down, letting air slip through the wing. This essentially breaks the integrity of the wing, allowing for a much easier movement in the up direction. The integrity of the wing is reestablished in the down movement, which allows for part of the lift inherent in bird wings. This function is most important in taking off or achieving lift at very low or slow speeds where the bird is reaching up and grabbing air and pulling itself up. At high speeds the air foil function of the wing provides most of the lift needed to stay in flight.

The large amounts of energy required for flight have led to the evolution of a unidirectional pulmonary system to provide the large quantities of oxygen required for their high respiratory rates. This high metabolic rate produces large quantities of radicals in the cells that can damage DNA and lead to tumours. Birds, however, do not suffer from an otherwise expected shortened lifespan as their cells have evolved a more efficient antioxidant system than those found in other animals.

Evolution of bird flight

Most paleontologists agree that birds evolved from small theropod dinosaurs, but the origin of bird flight is one of the oldest and most hotly contested debates in paleontology. The four main hypotheses are:

  • From the trees down, that birds' ancestors first glided down from trees and then acquired other modifications that enabled true powered flight.
  • From the ground up, that birds' ancestors were small, fast predatory dinosaurs in which feathers developed for other reasons and then evolved further to provide first lift and then true powered flight.
  • Wing-assisted incline running (WAIR), a version of "from the ground up" in which birds' wings originated from forelimb modifications that provided downforce, enabling the proto-birds to run up extremely steep slopes such as the trunks of trees.
  • Pouncing proavis, which posits that flight evolved by modification from arboreal ambush tactics.

There has also been debate about whether the earliest known bird, Archaeopteryx, could fly. It appears that Archaeopteryx had the avian brain structures and inner-ear balance sensors that birds use to control their flight. Archaeopteryx also had a wing feather arrangement like that of modern birds and similarly asymmetrical flight feathers on its wings and tail. But Archaeopteryx lacked the shoulder mechanism by which modern birds' wings produce swift, powerful upstrokes; this may mean that it and other early birds were incapable of flapping flight and could only glide. The presence of most fossils in marine sediments in habitats devoid of vegetation has led to the hypothesis that they may have used their wings as aids to run across the water surface in the manner of the basilisk lizards.

In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner substantially different from that of modern birds.

From the trees down

It is unknown how well Archaeopteryx could fly, or if it could even fly at all.

This was the earliest hypothesis, encouraged by the examples of gliding vertebrates such as flying squirrels. It suggests that proto-birds like Archaeopteryx used their claws to clamber up trees and glided off from the tops.

Some recent research undermines the "trees down" hypothesis by suggesting that the earliest birds and their immediate ancestors did not climb trees. Modern birds that forage in trees have much more curved toe-claws than those that forage on the ground. The toe-claws of Mesozoic birds and of closely related non-avian theropod dinosaurs are like those of modern ground-foraging birds.

From the ground up

Feathers have been discovered in a variety of coelurosaurian dinosaurs (including the early tyrannosauroid Dilong). Modern birds are classified as coelurosaurs by nearly all palaeontologists. The original functions of feathers may have included thermal insulation and competitive displays. The most common version of the "from the ground up" hypothesis argues that birds' ancestors were small ground-running predators (rather like roadrunners) that used their forelimbs for balance while pursuing prey, and that the forelimbs and feathers later evolved in ways that provided gliding and then powered flight. Another "ground upwards" theory argues that the evolution of flight was initially driven by competitive displays and fighting: displays required longer feathers and longer, stronger forelimbs; many modern birds use their wings as weapons, and downward blows have a similar action to that of flapping flight. Many of the Archaeopteryx fossils come from marine sediments and it has been suggested that wings may have helped the birds run over water in the manner of the common basilisk.

Most recent attacks on the "from the ground up" hypothesis attempt to refute its assumption that birds are modified coelurosaurian dinosaurs. The strongest attacks are based on embryological analyses, which conclude that birds' wings are formed from digits 2, 3 and 4 (corresponding to the index, middle and ring fingers in humans; the first of a bird's three digits forms the alula, which is used to avoid stalling in low-speed flight, for example when landing), whereas the hands of coelurosaurs are formed by digits 1, 2 and 3 (the thumb and first two fingers in humans). However, these analyses were immediately challenged on the embryological grounds that the "hand" often develops differently in clades that have lost some digits in the course of their evolution, and that therefore birds' hands do develop from digits 1, 2 and 3.

Wing-assisted incline running

The wing-assisted incline running (WAIR) hypothesis was prompted by observation of young chukar chicks, and proposes that wings developed their aerodynamic functions as a result of the need to run quickly up very steep slopes such as tree trunks, for example to escape from predators. Note that in this scenario birds need downforce to give their feet increased grip. But early birds, including Archaeopteryx, lacked the shoulder mechanism that modern birds' wings use to produce swift, powerful upstrokes. Since the downforce that WAIR requires is generated by upstrokes, it seems that early birds were incapable of WAIR.

Pouncing proavis model

The proavis theory was first proposed by Garner, Taylor, and Thomas in 1999:

We propose that birds evolved from predators that specialized in ambush from elevated sites, using their raptorial hindlimbs in a leaping attack. Drag–based, and later lift-based, mechanisms evolved under selection for improved control of body position and locomotion during the aerial part of the attack. Selection for enhanced lift-based control led to improved lift coefficients, incidentally turning a pounce into a swoop as lift production increased. Selection for greater swooping range would finally lead to the origin of true flight.

The authors believed that this theory had four main virtues:

  • It predicts the observed sequence of character acquisition in avian evolution.
  • It predicts an Archaeopteryx-like animal, with a skeleton more or less identical to terrestrial theropods, with few adaptations to flapping, but very advanced aerodynamic asymmetrical feathers.
  • It explains that primitive pouncers (perhaps like Microraptor) could coexist with more advanced fliers (like Confuciusornis or Sapeornis) since they did not compete for flying niches.
  • It explains that the evolution of elongated rachis-bearing feathers began with simple forms that produced a benefit by increasing drag. Later, more refined feather shapes could begin to also provide lift.

Uses and loss of flight in modern birds

Birds use flight to obtain prey on the wing, for foraging, to commute to feeding grounds, and to migrate between the seasons. It is also used by some species to display during the breeding season and to reach safe isolated places for nesting.

Flight is more energetically expensive in larger birds, and many of the largest species fly by soaring and gliding (without flapping their wings) as much as possible. Many physiological adaptations have evolved that make flight more efficient.

Birds that settle on isolated oceanic islands that lack ground-based predators may over the course of evolution lose the ability to fly. One such example is the flightless cormorant, native to the Galápagos Islands. This illustrates both flight's importance in avoiding predators and its extreme demand for energy.

Bell test

From Wikipedia, the free encyclopedia

A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. The experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.

According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics.

Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests".

Overview

The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision.

In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past, relative to its location. This action at a distance would violate the theory of relativity, by allowing information between the two locations to travel faster than the speed of light.

Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper.

In 1964, John Stewart Bell proposed his now famous theorem, which states that no physical theory of hidden local variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments.

A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics.

In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated.

To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables.

Conduct of optical Bell test experiments

In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.

A typical CHSH (two-channel) experiment

Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation can be set by the experimenter. Emerging signals from each channel are detected and coincidences counted by the coincidence monitor CM.

The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.

Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (equation (2) shown below). The settings a, a′, b and b′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.

For each selected value of a and b, the numbers of coincidences in each category (N++, N−−, N+− and N−+) are recorded. The experimental estimate for E(a, b) is then calculated as:

E(a, b) = (N++ + N−− − N+− − N−+) / (N++ + N−− + N+− + N−+)   (1)

Once all four E’s have been estimated, an experimental estimate of the test statistic

S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)   (2)

can be found. If S is numerically greater than 2, the CHSH inequality has been infringed; the experiment is then declared to have supported the QM prediction and ruled out all local hidden-variable theories.

A strong assumption has had to be made, however, to justify use of expression (2). It has been assumed that the sample of detected pairs is representative of the pairs emitted by the source. That this assumption may not be true comprises the fair sampling loophole.

The derivation of the inequality is given in the CHSH Bell test page.
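To make the procedure concrete, the following sketch applies equations (1) and (2) to a set of invented coincidence counts. The counts are chosen only to resemble the quantum-mechanical prediction at the Bell test angles; they are not data from any real experiment.

```python
import math

def correlation(n_pp, n_pm, n_mp, n_mm):
    """Equation (1): E = (N++ + N-- - N+- - N-+) / (N++ + N-- + N+- + N-+)."""
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_mm + n_pm + n_mp)

# Invented coincidence counts (N++, N+-, N-+, N--) for the four subexperiments,
# chosen to resemble the quantum prediction at the Bell test angles.
counts = {
    ("a", "b"):   (4268,  733,  721, 4278),
    ("a", "b'"):  ( 720, 4265, 4280,  735),
    ("a'", "b"):  (4270,  728,  730, 4272),
    ("a'", "b'"): (4260,  740,  726, 4274),
}

E = {pair: correlation(*c) for pair, c in counts.items()}

# Equation (2): S = E(a, b) - E(a, b') + E(a', b) + E(a', b')
S = E[("a", "b")] - E[("a", "b'")] + E[("a'", "b")] + E[("a'", "b'")]

print(f"S = {S:.3f}")                              # local hidden variables require |S| <= 2
print(f"quantum maximum = {2 * math.sqrt(2):.3f}") # Tsirelson bound, ~2.828
```

With these invented counts the estimate of S comes out near the quantum-mechanical maximum of 2√2 ≈ 2.83, well above the local hidden-variable bound of 2.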

A typical CH74 (single-channel) experiment

Setup for a "single-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a single channel (e.g. "pile of plates") polariser whose orientation can be set by the experimenter. Emerging signals are detected and coincidences counted by the coincidence monitor CM.

Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic

S = (N(a, b) − N(a, b′) + N(a′, b) + N(a′, b′) − N(a′, ∞) − N(∞, b)) / N(∞, ∞)   (3)

where the symbol ∞ indicates absence of a polariser.

If S exceeds 0 then the experiment is declared to have infringed Bell's inequality and hence to have "refuted local realism". In order to derive (3), CHSH in their 1969 paper had to make an extra assumption, the so-called "fair sampling" assumption. This means that the probability of detection of a given photon, once it has passed the polarizer, is independent of the polarizer setting (including the 'absence' setting). If this assumption were violated, then in principle a local hidden-variable (LHV) model could violate the CHSH inequality.
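A corresponding sketch for the single-channel estimate of equation (3) is shown below, again with invented counts for illustration; the label "inf" in the code stands for the ∞ setting, i.e. a removed polariser.

```python
# Invented coincidence counts; "inf" marks a subexperiment with that polariser removed.
N = {
    ("a", "b"):     4268,
    ("a", "b'"):     732,
    ("a'", "b"):    4268,
    ("a'", "b'"):   4268,
    ("a'", "inf"):  5000,   # polariser 2 absent
    ("inf", "b"):   5000,   # polariser 1 absent
    ("inf", "inf"): 10000,  # both polarisers absent
}

# Equation (3):
# S = [N(a,b) - N(a,b') + N(a',b) + N(a',b') - N(a',inf) - N(inf,b)] / N(inf,inf)
S = (N[("a", "b")] - N[("a", "b'")] + N[("a'", "b")] + N[("a'", "b'")]
     - N[("a'", "inf")] - N[("inf", "b")]) / N[("inf", "inf")]

print(f"S = {S:.3f}")  # the single-channel inequality requires S <= 0 under local realism
```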

In a later 1974 article, Clauser and Horne replaced this assumption by a much weaker, "no enhancement" assumption, deriving a modified inequality, see the page on Clauser and Horne's 1974 Bell test.

Experimental assumptions

In addition to the theoretical assumptions made, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, but some do not consider it obvious that this is true. There may also be synchronisation problems – ambiguity in recognising pairs because in practice they will not be detected at exactly the same time.

Nevertheless, despite all these deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics which is now known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of a secure information transfer, which utilizes the so-called quantum cryptography (involving entangled states of pairs of particles).

Notable experiments

Over the past thirty or so years, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden-variable theories, and recently an experiment has been performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities.

Some of the best known and recent experiments include:

Freedman and Clauser (1972)

Stuart J. Freedman and John Clauser carried out the first actual Bell test, using Freedman's inequality, a variant on the CH74 inequality.

Aspect et al. (1982)

Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).

Tittel et al. (1998)

The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.

Weihs et al. (1998): experiment under "strict Einstein locality" conditions

In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an ingenious experiment that closed the "locality" loophole, improving on Aspect's of 1982. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.

Pan et al. (2000) experiment on the GHZ state

This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.

Rowe et al. (2001): the first to close the detection loophole

The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.

Go et al. (Belle collaboration): Observation of Bell inequality violation in B mesons

Using semileptonic B0 decays of Υ(4S) at the Belle experiment, a clear violation of the Bell inequality in particle-antiparticle correlations was observed.

Gröblacher et al. (2007) test of Leggett-type non-local realist theories

A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive.

Salart et al. (2008): separation in a Bell Test

This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.

Ansmann et al. (2009): overcoming the detection loophole in solid state

This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.

Giustina et al. (2013), Larsson et al (2014): overcoming the detection loophole for photons

The detection loophole for photons was closed for the first time by a group led by Anton Zeilinger, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments.

Christensen et al. (2013): overcoming the detection loophole for photons

The Christensen et al. (2013) experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). The experiment was not pulsed, so formation of "pairs" from the two records of measurement results (Alice and Bob) had to be done after the experiment, which in fact exposes the experiment to the coincidence loophole. This led to a reanalysis of the experimental data in a way which removed the coincidence loophole, and the new analysis still showed a violation of the appropriate CHSH or CH inequality. On the other hand, the Christensen et al. experiment was pulsed and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time.

Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests

In 2015 the first three significant-loophole-free Bell tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them "loophole-free" in the sense that the remaining conceivable loopholes, such as superdeterminism, require truly exotic hypotheses that might never be closed experimentally.

The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality (S = 2.42 ± 0.20). The local-realist hypothesis could thereby be rejected with a p-value of 0.039, i.e., the chance of accidentally measuring the reported result in a local-realist world would be at most 3.9%.
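For orientation (this is not part of any of the papers above): the CHSH quantity S combines four measured correlators, one per pair of settings, and local realism bounds |S| ≤ 2 while quantum mechanics allows at most 2√2 ≈ 2.83. A minimal Python sketch, using hypothetical correlator values and the Hensen et al. result only to illustrate the naive "standard deviations above the classical bound" reading:

```python
def chsh(e_ab, e_abp, e_apb, e_apbp):
    """CHSH combination of four correlators; local realism requires |S| <= 2."""
    return e_ab - e_abp + e_apb + e_apbp

# Hypothetical correlators near the quantum ideal for the optimal settings
# (illustrative numbers only, not data from any experiment described here).
print(chsh(0.71, -0.71, 0.71, 0.71))   # ~2.84, close to the Tsirelson bound 2*sqrt(2)

# Naive reading of the Hensen et al. value S = 2.42 +/- 0.20: how many standard
# errors above the classical bound of 2?  (The published p = 0.039 comes from a
# more rigorous, memory-loophole-safe hypothesis test, not from this estimate.)
print((2.42 - 2.0) / 0.20)             # ~2.1
```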

Both simultaneously published experiments by Giustina et al. and Shalm et al. used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪ 10^−6). Notably, the experiment by Shalm et al. also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the “'Cultural' pseudorandom source”, which involved using bit strings from popular media such as the Back to the Future films, Star Trek: Beyond the Final Frontier, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who.

Schmied et al. (2016): Detection of Bell correlations in a many-body system

Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude, for the first time, the presence of Bell correlations in a many-body system composed of about 480 atoms in a Bose–Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime.

Handsteiner et al. (2017): "Cosmic Bell Test" - Measurement Settings from Milky Way Stars

Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.”

Rosenfeld et al. (2017): "Event-Ready" Bell test with entangled atoms and closed detection and locality loopholes

Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms separated by 398 meters, with the detection loophole, the locality loophole, and the memory loophole closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10^−16 when taking into account 7 months of data and 55,000 events, or an upper bound of P = 2.57×10^−9 from a single run with 10,000 events.

The BIG Bell Test Collaboration (2018): "Challenging local realism with human choices"

An international collaborative scientific effort showed that human free will can be used to close the 'freedom-of-choice loophole'. This was achieved by collecting random decisions from humans instead of random number generators. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.

Rauch et al. (2018): measurement settings from distant quasars

In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago).

The 2019 PBS Nova episode Einstein's Quantum Riddle documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands.

Loopholes

Though the series of increasingly sophisticated Bell test experiments has convinced the physics community in general that local realism is untenable, local realism can never be excluded entirely. For example, the hypothesis of superdeterminism in which all experiments and outcomes (and everything else) are predetermined cannot be tested (it is unfalsifiable).

Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that, since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that the particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, which arises because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles behaves randomly, but that the instruments detect only a subsample showing quantum correlations, because detection depends on a combination of local hidden variables and the detector setting.

Experimenters had repeatedly stated that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins separated by 1.3 kilometres and corroborated by two experiments using entangled photon pairs.

The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a way that was not detected in the experiments.

Detection loophole

A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Pearle in 1970, who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole, is called the fair sampling assumption.
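To make the unfair-sampling mechanism concrete, the following toy simulation may help. It is not Pearle's 1970 model but a simpler local model in the same spirit, along the lines of the one by Gisin and Gisin (1999): each pair carries a hidden angle λ, Bob's detector is allowed to miss events with a probability that depends on λ and his setting, and the surviving coincidences reproduce the quantum correlation E(a, b) = −cos(a − b), hence a CHSH value of about 2√2, even though the model is entirely local:

```python
import math, random

def simulate_chsh(n=200_000):
    """Local hidden-variable model with setting-dependent detection on Bob's side.
    Coincidence-only correlators give E(a, b) = -cos(a - b), so |S| ~ 2*sqrt(2)."""
    settings_a = (0.0, math.pi / 2)
    settings_b = (math.pi / 4, 3 * math.pi / 4)
    totals = {(i, j): [0.0, 0] for i in range(2) for j in range(2)}
    for _ in range(n):
        lam = random.uniform(0.0, 2 * math.pi)            # shared hidden variable
        i, j = random.randrange(2), random.randrange(2)   # independent random settings
        a, b = settings_a[i], settings_b[j]
        alice = -1 if math.cos(lam - a) > 0 else 1        # Alice always fires
        if random.random() < abs(math.cos(lam - b)):      # Bob fires only sometimes:
            bob = 1 if math.cos(lam - b) > 0 else -1      # unfair, setting-dependent sampling
            totals[(i, j)][0] += alice * bob
            totals[(i, j)][1] += 1
    corr = {k: s / c for k, (s, c) in totals.items()}
    return abs(corr[0, 0] - corr[0, 1] + corr[1, 0] + corr[1, 1])

print(simulate_chsh())   # ~2.83 from a purely local model
```

In this sketch Bob's average detection probability is only 2/π ≈ 64%, below the Garg–Mermin threshold discussed next, which is exactly why the fake violation is possible.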

To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η, defined as the probability that a photodetector detects a photon that arrives at it. Garg and Mermin showed that when using a maximally entangled state and the CHSH inequality, an efficiency of η > 2(√2 − 1) ≈ 0.83 is required for a loophole-free violation. Later Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds; for example, there exists a four-setting inequality which is violated for η > (√5 − 1)/2 ≈ 0.62.
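One way to see where the Garg–Mermin number comes from (a standard back-of-the-envelope argument, not taken from any specific experiment above): if both detectors have efficiency η and only coincidences are kept, local models can push the CHSH value up to 4/η − 2, so a quantum violation reaching the Tsirelson bound 2√2 is only conclusive once 4/η − 2 < 2√2.

```python
from math import sqrt

def lhv_coincidence_bound(eta):
    """Largest CHSH value a local model can fake when only coincidences are kept
    and both detectors have efficiency eta (symmetric-efficiency argument)."""
    return 4.0 / eta - 2.0

tsirelson = 2 * sqrt(2)            # maximal quantum CHSH value, ~2.828
eta_crit = 2 * (sqrt(2) - 1)       # Garg-Mermin threshold, ~0.828
print(lhv_coincidence_bound(eta_crit), tsirelson)   # both ~2.828: below this
                                                    # efficiency the violation can be faked
```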

Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, superconducting qubits, and nitrogen-vacancy centers. These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.

Locality loophole

One of the assumptions of Bell's theorem is that of locality, namely that the choice of setting at one measurement site does not influence the result at the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites.
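The requirement is simple special-relativity bookkeeping: with the measurement sites a distance d apart, everything from the setting choice to the recorded outcome on one side must fit within a time shorter than d/c. A small illustrative check (the 1.3 km figure echoes the Hensen et al. separation; the microsecond timings are made up for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def spacelike_separated(distance_m, elapsed_s):
    """True if a light signal could not bridge the two sites within the elapsed time."""
    return elapsed_s < distance_m / C

# 1.3 km separation: light needs ~4.3 microseconds to cross it, so the whole
# choose-setting-and-record-outcome sequence must finish faster than that.
print(spacelike_separated(1300.0, 3.0e-6))   # True  (3 us < 4.3 us)
print(spacelike_separated(1300.0, 6.0e-6))   # False (too slow; locality loophole open)
```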

The first experiment that strove to respect this condition was Alain Aspect's 1982 experiment, in which the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by 144 km (89 mi).

Coincidence loophole

In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This opens a new possibility for a local hidden-variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between the hidden variables carried by the particles and the detector settings encountered at the measurement stations.

The coincidence loophole can be ruled out entirely by working with a pre-fixed lattice of detection windows that are short enough that most pairs of events occurring in the same window originate from the same emission, yet long enough that a true pair is not separated by a window boundary.
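A sketch of the difference between the two pairing strategies (function names, window length and timestamps are illustrative, not taken from any experiment): pair_by_proximity mimics the vulnerable post-hoc method, while pair_by_lattice implements the fixed-window idea, assigning each detection to window number ⌊t/τ⌋ so that hidden-variable-dependent delays can only lose pairs, not create setting-dependent ones.

```python
from math import floor

def pair_by_proximity(times_a, times_b, max_dt):
    """Post-hoc pairing: accept any Alice/Bob events whose detection times are
    closer than max_dt; delays can change which events end up paired."""
    return [(ta, tb) for ta in times_a for tb in times_b if abs(ta - tb) < max_dt]

def pair_by_lattice(times_a, times_b, tau):
    """Fixed-lattice pairing: count a coincidence only when both sides register
    an event inside the same pre-defined window [k*tau, (k+1)*tau)."""
    wins_a = {floor(t / tau) for t in times_a}
    wins_b = {floor(t / tau) for t in times_b}
    return sorted(wins_a & wins_b)     # indices of windows holding a coincidence

# Toy data (arbitrary units): the second Bob event has been delayed to t = 2.1,
# just past the window boundary at t = 2.0.
alice = [0.2, 1.8]
bob   = [0.3, 2.1]
print(pair_by_proximity(alice, bob, max_dt=0.4))  # still pairs (1.8, 2.1)
print(pair_by_lattice(alice, bob, tau=1.0))       # only window 0 counts; the delayed pair is dropped
```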

Memory loophole

In most experiments, measurements are repeatedly made at the same two locations. A local hidden-variable theory could exploit the memory of past measurement settings and outcomes to increase the violation of a Bell inequality. Moreover, physical parameters might vary in time. It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.
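The reason randomized per-trial settings neutralize memory effects can be phrased as a martingale argument (in the spirit of martingale-based analyses such as Richard Gill's; the exact constants depend on the estimator actually used, so treat this purely as an illustrative sketch). Suppose each trial yields a score s_i in [0, 1] whose conditional expectation, given everything that happened in earlier trials, is at most μ under local realism; the Azuma–Hoeffding inequality then bounds the chance that the observed mean exceeds μ by some margin, however cleverly the hypothetical hidden variables remember the past.

```python
from math import exp

def memory_safe_p_bound(n_trials, excess):
    """Azuma-Hoeffding bound: P(mean score >= mu + excess) <= exp(-n * excess**2 / 2),
    valid for per-trial scores in [0, 1] whose conditional mean under local realism
    is at most mu even given the full history of earlier trials."""
    return exp(-n_trials * excess ** 2 / 2)

# Example: 10,000 trials whose average score exceeds the local-realist ceiling by 0.05.
print(memory_safe_p_bound(10_000, 0.05))   # ~3.7e-6
```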
