
Sunday, April 19, 2015

Permian–Triassic extinction event


From Wikipedia, the free encyclopedia

Marine extinction intensity during the Phanerozoic: plot of extinction intensity (percentage of marine genera that are present in each interval of time but do not exist in the following interval) versus time in the past.[1] Geological periods, from the Cambrian through the Neogene, are annotated by abbreviation and colour above the plot. The Permian–Triassic (P–Tr) extinction event is the most significant event for marine genera, with just over 50% (according to this source) failing to survive.

The Permian–Triassic (P–Tr) extinction event, colloquially known as the Great Dying or the Great Permian Extinction,[2][3] occurred about 252 Ma (million years) ago,[4] forming the boundary between the Permian and Triassic geologic periods, as well as the Paleozoic and Mesozoic eras. It is the Earth's most severe known extinction event, with up to 96% of all marine species[5][6] and 70% of terrestrial vertebrate species becoming extinct.[7] It is the only known mass extinction of insects.[8][9] Some 57% of all families and 83% of all genera became extinct. Because so much biodiversity was lost, the recovery of life on Earth took significantly longer than after any other extinction event,[5] possibly up to 10 million years.[10]

There is evidence for one to three distinct pulses, or phases, of extinction.[7][11][12][13] Several mechanisms have been proposed for the extinctions; the earlier phase was probably due to gradual environmental change, while the latter phase has been argued to be due to a catastrophic event. Suggested mechanisms for the latter include one or more large bolide impact events, massive volcanism, coal or gas fires and explosions from the Siberian Traps,[14] and a runaway greenhouse effect triggered by sudden release of methane from the sea floor due to methane clathrate dissociation or methane-producing microbes known as methanogens;[15] possible contributing gradual changes include sea-level change, increasing anoxia, increasing aridity, and a shift in ocean circulation driven by climate change.

Dating the extinction

Until 2000, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to determine its details reliably.[20] Uranium-lead dating of zircons from rock sequences at multiple locations in southern China[4] dates the extinction to 252.28 ± 0.08 Ma; an earlier study of rock sequences near Meishan in Changxing County of Zhejiang Province, China,[21] dates the extinction to 251.4 ± 0.3 Ma, with an elevated extinction rate continuing for some time thereafter.[11] A large (approximately 0.9%), abrupt global decrease in the ratio of the stable isotope 13C to 12C coincides with this extinction[18][22][23][24][25] and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating.[26] Further evidence for environmental change around the P–Tr boundary suggests an 8 °C (14 °F) rise in temperature[18] and an increase in CO2 levels by 2,000 ppm (by contrast, the concentration immediately before the Industrial Revolution was 280 ppm).[18] There is also evidence of increased ultraviolet radiation reaching the Earth, causing the mutation of plant spores.[18]
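
The carbon-isotope shift quoted above, and the δ13C values cited later in this article, are conventionally expressed in the standard geochemical delta notation (a general convention, not something specific to these studies):

\delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000 \quad \text{(in per mil, ‰)}

On this scale, a relative decrease of about 0.9–1% in the 13C/12C ratio, as described above, corresponds to a negative excursion of roughly 9–10 ‰ in δ13C.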

It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi.[27] For a while this "fungal spike" was used by some paleontologists to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or lack suitable index fossils, but even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem in the earliest Triassic.[27] The very idea of a fungal spike has been criticized on several grounds, including that: Reduviasporonites, the most common supposed fungal spore, was actually a fossilized alga;[18][28] the spike did not appear worldwide;[29][30] and in many places it did not fall on the Permian–Triassic boundary.[31] The algae, which were misidentified as fungal spores, may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds.[32] Newer chemical evidence agrees better with a fungal origin for Reduviasporonites, diluting these critiques.[33]

Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses[7] or that the extinction was spread out over a few million years, with a sharp peak in the last million years of the Permian.[31][34] Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province, in southeastern China, suggest that the main extinction was clustered around one peak.[11] Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by 670 to 1170 thousand years.[35] In a well-preserved sequence in east Greenland, the decline of animals is concentrated in a period 10 to 60 thousand years long, with plants taking several hundred thousand additional years to show the full impact of the event.[36] An older theory, still supported in some recent papers,[37] is that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time, while the other losses occurred during the first pulse or the interval between pulses. According to this theory, one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian.[7][38] For example, all but one of the surviving dinocephalian genera died out at the end of the Guadalupian,[39] as did the Verbeekinidae, a family of large fusuline foraminifera.[40] The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups—brachiopods and corals had severe losses.[41][42]

Extinction patterns

Marine organisms

Marine invertebrates suffered the greatest losses during the P–Tr extinction. In the intensively sampled south China sections at the P–Tr boundary, for instance, 286 out of 329 marine invertebrate genera disappear within the final 2 sedimentary zones containing conodonts from the Permian.[11]

Statistical analysis of marine losses at the end of the Permian suggests that the decrease in diversity was caused by a sharp increase in extinctions instead of a decrease in speciation.[44] The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons,[45] for the increase in atmospheric CO2 led to ocean acidification.

Among benthic organisms, the extinction event multiplied background extinction rates, and therefore caused most damage to taxa that had a high background extinction rate (by implication, taxa with a high turnover).[46][47] The extinction rate of marine organisms was catastrophic.[11][48][49][50]

Surviving marine invertebrate groups include: articulate brachiopods (those with a hinge), which have suffered a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse.

The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatus were the worst hit.[16][51] In the case of the brachiopods at least, surviving taxa were generally small, rare members of a diverse community.[52]

The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. This preliminary extinction, which greatly reduced disparity (the range of different ecological guilds), appears to have been driven by environmental factors. Diversity and disparity fell further until the P–Tr boundary; the extinction here was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low.[53]

The range of morphospace occupied by the ammonoids, that is the range of possible forms, shape or structure, became more restricted as the Permian progressed. Just a few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades.[54]

Terrestrial invertebrates

The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the only known mass extinction of insects,[8] with eight or nine insect orders becoming extinct and ten more greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions.[48]

Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those that lived prior to the P–Tr extinction. With the exception of the Glosselytrodea, Miomoptera, and Protorthoptera, Paleozoic insect groups have not been discovered in deposits dating to after the P–Tr boundary. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.[8]

Terrestrial plants

Plant ecosystem response

The geological record of terrestrial plants is sparse, and based mostly on pollen and spore studies. Interestingly, plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level.[18] Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes.[18] However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing;[18][55] the Palaeozoic flora scarcely survived this extinction.[56]

At the P–Tr boundary, the dominant floral groups changed, with many groups of land plants entering abrupt decline, such as Cordaites (gymnosperms) and Glossopteris (seed ferns).[57] Dominant gymnosperm genera were replaced post-boundary by lycophytes—extant lycophytes are recolonizers of disturbed areas.[58]

Palynological (pollen) studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that marine invertebrate macrofauna are in decline, these large woodlands die out and are followed by a rise in diversity of smaller herbaceous plants, including Lycopodiophyta, both Selaginellales and Isoetales. Later, other groups of gymnosperms again become dominant but again suffer major die-offs; these cyclical floral shifts occur a few times over the course of the extinction period and afterwards. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities do not coincide with the shift in δ13C values but occur some time after.[30] The recovery of gymnosperm forests took 4–5 million years.[18]

Coal gap

No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade.[19] This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects and vertebrates evolved and killed vast numbers of trees; however, these decomposers themselves suffered heavy losses of species during the extinction and are not considered a likely cause of the coal gap.[19] It could simply be that all coal-forming plants were rendered extinct by the P–Tr extinction, and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs.[19] On the other hand, abiotic factors (not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame.[18] Finally, it is also true that there are very few sediments of any type known from the Early Triassic, and the lack of coal may simply reflect this scarcity. This opens the possibility that coal-producing ecosystems may have responded to the changed conditions by relocating, perhaps to areas where we have no sedimentary record for the Early Triassic.[18] For example, in eastern Australia a cold climate had been the norm for a long period of time, with a peat mire ecosystem specialised to these conditions; approximately 95% of these peat-producing plants went locally extinct at the P–Tr boundary.[59] Notably, coal deposits in Australia and Antarctica disappear significantly before the P–Tr boundary.[18]

Terrestrial vertebrates

There is enough evidence to indicate that over two-thirds of terrestrial labyrinthodont amphibian, sauropsid ("reptile"), and therapsid ("mammal-like reptile") families became extinct. Large herbivores suffered the heaviest losses. All Permian anapsid reptiles died out except the procolophonids (testudines have anapsid skulls but are most often thought to have evolved later, from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs [including birds] evolved).[60][61] Even the groups that survived suffered extremely heavy losses of species, and some terrestrial vertebrate groups very nearly became extinct at the end-Permian. Some of the surviving groups did not persist for long past this period, while others that barely survived went on to produce diverse and long-lasting lineages. Yet it took 30 million years for the terrestrial vertebrate fauna to fully recover both numerically and ecologically.[62]

Possible explanations of these patterns

An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, while the most tolerant organisms had very slight losses.

The most vulnerable marine organisms were those that produced calcareous hard parts (i.e., from calcium carbonate) and had low metabolic rates and weak respiratory systems—notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, for example sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses—except for conodonts, in which 33% of genera died out.[63]

This pattern is consistent with what is known about the effects of hypoxia, a shortage but not a total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic are never less than present day levels—the decline in oxygen levels does not match the temporal pattern of the extinction.[63]

Marine organisms are more sensitive to changes in CO2 levels than are terrestrial organisms for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (the lungs' alveoli, tracheae, and the like), even though CO2 diffuses more easily than oxygen. In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts.[63] In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean.[64]
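
The link between rising CO2 and the preferential loss of calcifiers follows from standard seawater carbonate chemistry (textbook equilibria, not taken from the cited studies): dissolved CO2 forms carbonic acid, the added protons convert carbonate ions to bicarbonate, and the calcium carbonate saturation state falls, making shells harder to build and easier to dissolve.

\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2\,H^+ + CO_3^{2-}}, \qquad \Omega = \frac{[\mathrm{Ca^{2+}}][\mathrm{CO_3^{2-}}]}{K_{\mathrm{sp}}}

Calcification becomes increasingly costly as Ω falls, and dissolution of CaCO3 becomes favourable once Ω drops below 1, which is consistent with the selectivity against heavily calcified taxa described above.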

It is difficult to analyze extinction and survival rates of land organisms in detail, because few terrestrial fossil beds span the Permian–Triassic boundary. Triassic insects are very different from those of the Permian, but a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions.[63] However, analysis of the fossil river deposits of the floodplains indicates a shift from meandering to braided river patterns, a sign of an abrupt drying of the climate.[65] The climate change may have taken as little as 100,000 years, prompting the extinction of the unique Glossopteris flora and its herbivores, followed by the carnivorous guild.[66]

Biotic recovery

Earlier analyses indicated that life on Earth recovered quickly after the Permian extinctions, but this was mostly in the form of disaster taxa, opportunist organisms such as the hardy Lystrosaurus. Research published in 2006 indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to the successive waves of extinction, which inhibited recovery, and prolonged environmental stress to organisms, which continued into the Early Triassic. Research indicates that recovery did not begin until the start of the mid-Triassic, 4 to 6 million years after the extinction;[67] and some writers estimate that the recovery was not complete until 30 Ma after the P–Tr extinction, i.e. in the late Triassic.[7]

A study published in the journal Science [68] found that during the Great Extinction the oceans' surface temperatures reached 40 °C (104 °F), which explains why recovery took so long: it was simply too hot for life to survive.[69]

During the early Triassic (4 to 6 million years after the P–Tr extinction), the plant biomass was insufficient to form coal deposits, which implies a limited food mass for herbivores.[19] River patterns in the Karoo changed from meandering to braided, indicating that vegetation there was very sparse for a long time.[70]

Each major segment of the early Triassic ecosystem—plant and animal, marine and terrestrial—was dominated by a small number of genera, which appeared virtually worldwide, for example: the herbivorous therapsid Lystrosaurus (which accounted for about 90% of early Triassic land vertebrates) and the bivalves Claraia, Eumorphotis, Unionites and Promyalina. A healthy ecosystem has a much larger number of genera, each living in a few preferred types of habitat.[57][71]

Disaster taxa took advantage of the devastated ecosystems and enjoyed a temporary population boom and increase in their territory. Examples include Lingula (a brachiopod); stromatolites, which had been confined to marginal environments since the Ordovician; Pleuromeia (a small, weedy plant); and Dicroidium (a seed fern). Microconchids were the dominant component of otherwise impoverished Early Triassic encrusting assemblages.[71][72][73]

Changes in marine ecosystems


Sessile filter feeders like this crinoid were significantly less abundant after the P–Tr extinction.

Prior to the extinction, about two-thirds of marine animals were sessile and attached to the sea floor but, during the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs.[74]

Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common; after the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one,[74] and the increase in predation pressure led to the Mesozoic Marine Revolution.

Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, and one group, the rudist clams, became the Mesozoic's main reef-builders. Some researchers think much of this change happened in the 5 million years between the two major extinction pulses.[75]

Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms.[76] Their ensuing adaptive radiation was brisk, and resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent.[77]

Land vertebrates


Lystrosaurus was by far the most abundant early Triassic land vertebrate.

Lystrosaurus, a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some earliest Triassic land vertebrate fauna. Smaller carnivorous cynodont therapsids also survived, including the ancestors of mammals.
In the Karoo region of southern Africa, the therocephalians Tetracynodon, Moschorhinus and Ictidosuchoides survived, but do not appear to have been abundant in the Triassic.[78]

Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous.[79] This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur and higher metabolic rates,[80] while losing some of the color-sensitive retinal receptors that reptiles and birds retained.

Some temnospondyl amphibians made a relatively quick recovery, in spite of nearly becoming extinct. Mastodonsaurus and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish.[81]

Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist M. J. Benton estimated that the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, by which time dinosaurs, pterosaurs, crocodiles, archosaurs, amphibians, and mammaliforms were abundant and diverse.[5]

Causes of the extinction event

Pinpointing the exact cause or causes of the Permian–Triassic extinction event is difficult, mostly because the catastrophe occurred over 250 million years ago, and much of the evidence that would have pointed to the cause either has been destroyed by now or is concealed deep within the Earth under many layers of rock. The sea floor is also completely recycled every 200 million years by the ongoing process of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean. With the fairly significant evidence that scientists have accumulated, several mechanisms have been proposed for the extinction event, including both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event). The former group includes one or more large bolide impact events, increased volcanism, and sudden release of methane from the sea floor, either due to dissociation of methane hydrate deposits or metabolism of organic carbon deposits by methanogenic microbes. The latter group includes sea level change, increasing anoxia, and increasing aridity. Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started, and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began.[45]

Impact event


Artist's impression of a major impact event: A collision between Earth and an asteroid a few kilometres in diameter would release as much energy as several million nuclear weapons detonating.

Evidence that an impact event may have caused the Cretaceous–Paleogene extinction event has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and therefore to a search for evidence of impacts at the times of other extinctions and for large impact craters of the appropriate age.

Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica;[82][83] fullerenes trapping extraterrestrial noble gases;[84] meteorite fragments in Antarctica;[85] and grains rich in iron, nickel and silicon, which may have been created by an impact.[86] However, the accuracy of most of these claims has been challenged.[87][88][89][90] Quartz from Graphite Peak in Antarctica, for example, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be not due to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism.[91]

An impact crater on the sea floor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit ocean as it is to hit land. However, Earth has no ocean-floor crust more than 200 million years old, because the "conveyor belt" process of seafloor spreading and subduction destroys it within that time. Craters produced by very large impacts may also be masked by extensive flood basalting from below after the crust is punctured or weakened.[92] Subduction should not, however, be entirely accepted as an explanation for the lack of firm evidence: as with the Cretaceous–Paleogene (K–Pg) event, an ejecta blanket stratum rich in siderophilic elements (e.g. iridium) would be expected to be seen in formations from the time.

One attraction of large impact theories is that, in principle, an impact could trigger other phenomena that have themselves been proposed as causes of the extinction,[93] such as the Siberian Traps eruptions (see below), whether the Traps lay at the impact site[94] or at its antipode.[93][95] The abruptness of an impact would also explain why more species did not rapidly evolve to survive, as might be expected if the Permian–Triassic event had been slower and less global than a meteorite impact.

Possible impact sites

Several possible impact craters have been proposed as the site of an impact causing the P–Tr extinction, including the Bedout structure off the northwest coast of Australia[83] and the hypothesized Wilkes Land crater of East Antarctica.[96][97] In each of these cases, the idea that an impact was responsible has not been proven, and has been widely criticized. In the case of Wilkes Land, the age of this sub-ice geophysical feature is very uncertain – it may be later than the Permian–Triassic extinction.

The Araguainha crater has been most recently dated to 254.7 ± 2.5 million years ago, overlapping with estimates for the Permo-Triassic boundary.[98] Much of the local rock was oil shale. The estimated energy released by the Araguainha impact is insufficient to be a direct cause of the global mass extinction, but the colossal local earth tremors would have released huge amounts of oil and gas from the shattered rock. The resulting sudden global warming might have precipitated the Permian–Triassic extinction event.[99]

Volcanism

The final stages of the Permian had two flood basalt events. A small one, the Emeishan Traps in China, occurred at the same time as the end-Guadalupian extinction pulse, in an area close to the equator at the time.[100][101] The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres (770,000 sq mi) with lava.[102][103][104] The Siberian Traps eruptions were formerly thought to have lasted for millions of years, but recent research dates them to 251.2 ± 0.3 Ma — immediately before the end of the Permian.[11][105]

The Emeishan and Siberian Traps eruptions may have caused dust clouds and acid aerosols—which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. These eruptions may also have caused acid rain when the aerosols washed out of the atmosphere. This may have killed land plants and molluscs and planktonic organisms which had calcium carbonate shells. The eruptions would also have emitted carbon dioxide, causing global warming. When all of the dust clouds and aerosols washed out of the atmosphere, the excess carbon dioxide would have remained and the warming would have proceeded without any mitigating effects.[93]

The Siberian Traps had unusual features that made them even more dangerous. Pure flood basalts produce fluid, low-viscosity lava and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic, i.e. consisted of ash and other debris thrown high into the atmosphere, increasing the short-term cooling effect.[106] The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled.[93]

There is doubt, however, about whether these eruptions were enough on their own to cause a mass extinction as severe as the end-Permian. Equatorial eruptions are necessary to produce sufficient dust and aerosols to affect life worldwide, whereas the much larger Siberian Traps eruptions were inside or near the Arctic Circle. Furthermore, if the Siberian Traps eruptions occurred within a period of 200,000 years, the atmosphere's carbon dioxide content would have doubled. Recent climate models suggest such a rise in CO2 would have raised global temperatures by 1.5 to 4.5 °C (2.7 to 8.1 °F), which is unlikely to cause a catastrophe as great as the P–Tr extinction.[93]
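
The warming figure quoted above can be reproduced with the standard logarithmic CO2-forcing approximation. The sketch below is illustrative only; the 1.5–4.5 °C-per-doubling sensitivity range is the conventional model range cited in the text, not a result derived here.

import math

def warming(c_final_ppm, c_initial_ppm, sensitivity_per_doubling):
    """Equilibrium warming for a CO2 change, assuming dT = S * log2(C_final / C_initial)."""
    return sensitivity_per_doubling * math.log2(c_final_ppm / c_initial_ppm)

# One doubling of CO2, the scenario discussed above for the Siberian Traps:
for s in (1.5, 4.5):   # conventional climate-sensitivity range, degrees C per doubling
    print(f"S = {s} degC per doubling -> {warming(560.0, 280.0, s):.1f} degC of warming")
# Prints 1.5 and 4.5 degC, matching the range quoted in the text.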

In January 2011, a team led by Stephen Grasby of the Geological Survey of Canada—Calgary, reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now Buchanan Lake. According to their article, "... coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed ...", and "Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds".[107][108] In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history."[109]

Methane hydrate gasification

Scientists have found worldwide evidence of a swift decrease of about 1% in the 13C/12C isotope ratio in carbonate rocks from the end-Permian.[50][110] This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in 13C/12C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells).[16]
A variety of factors may have contributed to this drop in the 13C/12C ratio, but most turn out to be insufficient to account fully for the observed amount:[111]
  • Gases from volcanic eruptions have a 13C/12C ratio about 0.5 to 0.8% below standard (δ13C about −5 to −8 ‰), but the amount required to produce a reduction of about 1.0% worldwide requires eruptions greater by orders of magnitude than any for which evidence has been found.[112]
  • A reduction in organic activity would extract 12C more slowly from the environment and leave more of it to be incorporated into sediments, thus reducing the 13C/12C ratio. Biochemical processes preferentially use the lighter isotopes, since chemical reactions are ultimately driven by electromagnetic forces between atoms and lighter isotopes respond more quickly to these forces. But a study of a smaller drop of 0.3 to 0.4% in 13C/12C (δ13C −3 to −4 ‰) at the Paleocene-Eocene Thermal Maximum (PETM) concluded that even transferring all the organic carbon (in organisms, soils, and dissolved in the ocean) into sediments would be insufficient: even such a large burial of material rich in 12C would not have produced the 'smaller' drop in the 13C/12C ratio of the rocks around the PETM.[112]
  • Buried sedimentary organic matter has a 13C/12C ratio 2.0 to 2.5% below normal (δ13C −20 to −25 ‰). Theoretically, if the sea level fell sharply, shallow marine sediments would be exposed to oxidization. But 6,500–8,400 gigatons (1 gigaton = 10⁹ metric tons) of organic carbon would have to be oxidized and returned to the ocean–atmosphere system within less than a few hundred thousand years to reduce the 13C/12C ratio by 1.0%. This is not thought to be a realistic possibility.[48]
  • Rather than a sudden decline in sea level, intermittent periods of ocean-bottom hyperoxia and anoxia (high-oxygen and low- or zero-oxygen conditions) may have caused the 13C/12C ratio fluctuations in the Early Triassic;[16] and global anoxia may have been responsible for the end-Permian blip. The continents of the end-Permian and early Triassic were more clustered in the tropics than they are now, and large tropical rivers would have dumped sediment into smaller, partially enclosed ocean basins at low latitudes. Such conditions favor oxic and anoxic episodes; oxic/anoxic conditions would result in a rapid release/burial, respectively, of large amounts of organic carbon, which has a low 13C/12C ratio because biochemical processes use the lighter isotopes more.[113] This, or another organic-based reason, may have been responsible for both this and a late Proterozoic/Cambrian pattern of fluctuating 13C/12C ratios.[16]
Other hypotheses include mass oceanic poisoning releasing vast amounts of CO2[114] and a long-term reorganisation of the global carbon cycle.[111]

The only proposed mechanism sufficient to cause a global 1.0% reduction in the 13C/12C ratio is the release of methane from methane clathrates.[48] Carbon-cycle models confirm it would have had enough effect to produce the observed reduction.[111][114] Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a 13C/12C ratio about 6.0% below normal (δ13C about −60 ‰). At the right combination of pressure and temperature, it gets trapped in clathrates fairly close to the surface of permafrost and in much larger quantities at continental margins (continental shelves and the deeper seabed close to them). Oceanic methane hydrates are usually found buried in sediments where the seawater is at least 300 m (980 ft) deep. They can be found up to about 2,000 m (6,600 ft) below the sea floor, but usually only about 1,100 m (3,600 ft) below the sea floor.[115]
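
A simple two-component isotope mass balance shows why an isotopically very light source such as clathrate methane is far more efficient at driving the excursion than buried organic matter. The sketch below is purely illustrative: the end-member δ13C values are those quoted in this section, the assumed final value reflects the ~10 ‰ shift implied by the ~1% ratio drop, and no attempt is made to reproduce the published gigaton estimates, which depend on reservoir assumptions.

def source_mass_ratio(delta_final, delta_src_a, delta_src_b):
    """Ratio of the carbon mass of source A to that of source B needed to shift the
    ocean-atmosphere reservoir to the same final delta13C. From the mixing balance
    M_source = M_reservoir * (d_reservoir - d_final) / (d_final - d_source),
    the reservoir size and its initial value cancel in the ratio."""
    return (delta_final - delta_src_b) / (delta_final - delta_src_a)

# delta13C end-member values quoted in this section (per mil):
delta_methane = -60.0   # clathrate methane
delta_organic = -25.0   # buried sedimentary organic matter (within the -20 to -25 range)
delta_final   = -10.0   # assumed: initial value near 0 plus the ~10 per-mil drop
                        # implied by the ~1% reduction in the 13C/12C ratio

ratio = source_mass_ratio(delta_final, delta_methane, delta_organic)
print(f"methane needs about {ratio:.2f} times the carbon that organic matter would")
# ~0.30: methane produces the same excursion with roughly a third of the carbon.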

The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane.[116] A vast release of methane might cause significant global warming, since methane is a very powerful greenhouse gas. Strong evidence suggests the global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes: a sharp decrease in oxygen isotope ratios (18O/16O);[117] the extinction of Glossopteris flora (Glossopteris and plants that grew in the same areas), which needed a cold climate, and its replacement by floras typical of lower paleolatitudes.[118]
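
For context on how the oxygen-isotope evidence translates into temperature: a commonly used rule of thumb is that carbonate δ18O falls by roughly 0.2–0.25 ‰ per °C of warming. This slope is a standard paleothermometry approximation, not a figure from the cited studies; a minimal sketch, assuming it:

def delta18o_shift(warming_deg_c, slope_permil_per_deg_c=0.22):
    """Approximate change in carbonate delta18O for a given warming, using a
    commonly quoted slope of ~0.22 per mil per degree C (ice-volume and
    seawater-composition effects ignored)."""
    return -slope_permil_per_deg_c * warming_deg_c

print(f"{delta18o_shift(6.0):.1f} per mil")  # about -1.3 per mil for the ~6 degC warming cited above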

However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the early Triassic. Not only would a methane cause require the release of five times as much methane as postulated for the PETM,[16] but it would also have to be reburied at an unrealistically high rate to account for the rapid increases in the 13C/12C ratio (episodes of high positive δ13C) throughout the early Triassic, before being released again several times.[16]

Methanosarcina

A 2014 paper suggested a microbial source for the carbon-cycle disruption: the methanogenic archaeal genus Methanosarcina. Three lines of chronology converge at about 250 mya, supporting a scenario in which a single gene transfer created a metabolic pathway for efficient methane production in these archaea, nourished by volcanic nickel. According to the theory, the resulting super-exponential microbial bloom suddenly freed carbon from ocean-bottom organic sediments into the water and air.[119]

Anoxia

Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia, including fine laminations in sediments, small pyrite framboids, high uranium/thorium ratios, and biomarkers for green sulfur bacteria, appears at the extinction event.[120] However, in some sites, including Meishan, China, and eastern Greenland, evidence for anoxia precedes the extinction.[121][122] Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia, because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P–Tr boundary indicates hydrogen sulfide was present even in shallow waters. This spread of toxic, oxygen-depleted water would have been devastating for marine life, producing widespread die-offs. Models of ocean chemistry show that anoxia and euxinia would have been closely associated with high levels of carbon dioxide.[123] This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life after the extinction. Models also show that anoxic events can cause catastrophic hydrogen sulfide emissions into the atmosphere (see below).[124]

The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps.[124] In this scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of phosphate to the ocean. This phosphate would have supported greater primary productivity in the surface oceans. This increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop because deep water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity.
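
The positive feedback described here (low oxygen leads to more efficient phosphate recycling, which fuels higher productivity, which consumes still more oxygen) can be illustrated with a deliberately simple iteration. Everything below is a made-up, dimensionless toy sketch intended only to show the runaway behaviour; it is not a published ocean model.

# Dimensionless toy feedback loop, illustrative only.
o2 = 1.0          # normalised deep-water oxygen (1 = well oxygenated)
phosphate = 1.0   # normalised dissolved phosphate

for step in range(6):
    productivity = phosphate                      # more nutrients -> more export production
    o2 = max(0.0, 1.0 - 0.6 * productivity)       # respiring the sinking organic matter consumes O2
    recycling = 1.0 + 1.5 * (1.0 - o2)            # anoxic sediments recycle phosphate more efficiently
    phosphate = recycling                         # recycled phosphate fuels the next round
    print(f"step {step}: O2 = {o2:.2f}, phosphate = {phosphate:.2f}")
# Once the loop engages, O2 collapses to zero within a few iterations and stays there.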

Hydrogen sulfide emissions

A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean. Upwelling of this water may have released massive hydrogen sulfide emissions into the atmosphere. This would poison terrestrial plants and animals, as well as severely weaken the ozone layer, exposing much of the life that remained to fatal levels of UV radiation.[124]
Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late-Permian into the Early Triassic indicates that hydrogen sulfide did upwell into shallow waters because these bacteria are restricted to the photic zone and use sulfide as an electron donor.

This hypothesis has the advantage of explaining the mass extinction of plants, which would have added to the methane levels and should otherwise have thrived in an atmosphere with a high level of carbon dioxide. Fossil spores from the end-Permian further support the theory: many show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer.

The supercontinent Pangaea


Map of Pangaea showing where today's continents were at the Permian–Triassic boundary

About halfway through the Permian (in the Kungurian age of the Permian's Cisuralian epoch), all the continents joined to form the supercontinent Pangaea, surrounded by the superocean Panthalassa, although blocks that are now parts of Asia did not join the supercontinent until very late in the Permian.[125] This configuration severely decreased the extent of shallow aquatic environments, the most productive part of the seas, and exposed formerly isolated organisms of the rich continental shelves to competition from invaders. Pangaea's formation would also have altered both oceanic circulation and atmospheric weather patterns, creating seasonal monsoons near the coasts and an arid climate in the vast continental interior.

Marine life suffered very high but not catastrophic rates of extinction after the formation of Pangaea (see the diagram "Marine genus biodiversity" at the top of this article)—almost as high as in some of the "Big Five" mass extinctions. The formation of Pangaea seems not to have caused a significant rise in extinction levels on land, and, in fact, most of the advance of the therapsids and increase in their diversity seems to have occurred in the late Permian, after Pangaea was almost complete. So it seems likely that Pangaea initiated a long period of increased marine extinctions, but was not directly responsible for the "Great Dying" and the end of the Permian.

Microbes

According to a theory published in 2014 (see also above), a genus of anaerobic methanogenic archaea known as Methanosarcina may have been largely responsible for the event.[126] Evidence suggests that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. This would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for one of the enzymes involved in producing methane.[119]

Combination of causes

Possible causes supported by strong evidence appear to describe a sequence of catastrophes, each one worse than the last: the Siberian Traps eruptions were bad enough in their own right, but because they occurred near coal beds and the continental shelf, they also triggered very large releases of carbon dioxide and methane.[63] The resultant global warming may have caused perhaps the most severe anoxic event in the oceans' history: according to this theory, the oceans became so anoxic that anaerobic sulfur-reducing organisms dominated the chemistry of the oceans and caused massive emissions of toxic hydrogen sulfide.[63]

However, there may be some weak links in this chain of events: the changes in the 13C/12C ratio expected to result from a massive release of methane do not match the patterns seen throughout the early Triassic;[16] and the types of oceanic thermohaline circulation, which may have existed at the end of the Permian, are not likely to have supported deep-sea anoxia.[127]

Sixth extinction, rivaling that of the dinosaurs, should join the big five, scientists say

Earth has seen its share of catastrophes, the worst being the “big five” mass extinctions scientists traditionally talk about. Now, paleontologists are arguing that a sixth extinction, 260 million years ago, at the end of a geological age called the Capitanian, deserves to be a member of the exclusive club. In a new study, they offer evidence for a massive die-off in shallow, cool waters in what is now Norway. That finding, combined with previous evidence of extinctions in tropical waters, means that the Capitanian was a global catastrophe.

“It’s the first time we can say this is a true global extinction,” says David Bond, a paleontologist at the University of Hull in the United Kingdom. Bond led a study that was published online this week in the Geological Society of America Bulletin. He adds that in magnitude, the Capitanian event was on par with the dinosaur-killing extinction 66 million years ago. “I’d put this up there with it, albeit with slightly less attractive victims,” Bond says.

Interest in the Capitanian began in the early 1990s, when paleontologists found evidence for fossil extinctions in rock formations in China. The rocks had originally formed on the floor of a shallow tropical sea. Most foraminifera—tiny, shelled protozoans—were wiped out, along with many species of clamlike brachiopods. There was also a possible trigger to blame: a set of ancient volcanic outbursts in China that solidified into rocks called the Emeishan Traps. The hot flood basalts would have released huge amounts of sulfur and carbon dioxide, potentially causing a quick global chill followed by a longer period of global warming. The gases could have also driven acidification and oxygen depletion in the oceans. Many scientists think that a similar massive burst of volcanic activity in Siberia touched off the biggest extinction of all time, just 8 million years later, at the end of the Permian period.
But the older, less studied Capitanian extinction has been dogged by criticism that it may have been a regional event, or just part of a gradual trend en route to the larger Permian extinction. Some of those criticisms may be quelled by the new evidence, which comes from Spitsbergen, the largest island in the Svalbard archipelago off the coast of Norway in the Arctic Ocean. There, Bond and his colleagues examined chert rocks—silica formations, created by the skeletons of dead sponges, that also contain many species of brachiopods. At the time, the rocks would have been forming in tens of meters of cooler water at midlatitudes. But at a stark point in the rock record, the fossils disappeared.

“They all drop out,” says study co-author Paul Wignall, a paleontologist at the University of Leeds in the United Kingdom. “It’s like a blackout zone and there’s nothing around.” A little further in the rock record, a few brachiopod species recover, Wignall says, and then mollusks take over en masse, before the devastation of the Permian extinction, 8 million years later.

The research team had a hard time tying the new record to the same moment in fossil records in China. Isotopic dating systems are too uncertain to provide a helpful absolute date. Another standard biostratigraphic method—linking the timing of different rock layers by the comings and goings of fossilized teeth of tiny eellike creatures called conodonts—also couldn’t be used, because the same species didn’t live in cool and tropical waters. Instead, the team points out that similar swings in different isotopes’ levels, occurring in both parts of the world, suggest that the two regions were experiencing the same changes in ocean chemistry at the same time.

That’s part of the problem, says Matthew Clapham, a paleontologist at the University of California, Santa Cruz. He thinks the study team has dated something a bit younger—maybe 255 million years old. “They’ve definitely identified a real event, which is really interesting,” he says. “Their age model is less convincing.” He also says that recent work in China on the extent of the Capitanian extinction across different species shows it may not have been quite as bad as originally thought. Clapham thinks the Capitanian is probably 30th or 40th in the hierarchy of extinctions, not sixth.

But Bond is still convinced that the Capitanian will go down in the history books as one of the world’s worst. “You have to change a lot of people’s minds,” he says. He is now studying fossil records in Russia and Greenland that could further buttress his arguments for a global disaster. Clapham, too, wants to see more work done on this enigmatic stretch of Earth history. “It’s a very mysterious event—it’s an interesting thing to study,” he says.
 
Posted in Earth, Paleontology

Science| DOI: 10.1126/science.aab2504

Friday, April 17, 2015

Potential applications of graphene


From Wikipedia, the free encyclopedia

Potential graphene applications include lightweight, thin, flexible, yet durable display screens, electric circuits, and solar cells, as well as various medical, chemical and industrial processes enhanced or enabled by the use of new graphene materials.[1]

In 2008, graphene produced by exfoliation was one of the most expensive materials on Earth, with a sample the area of a cross section of a human hair costing more than $1,000 as of April 2008 (about $100,000,000/cm2).[2] Since then, exfoliation procedures have been scaled up, and companies now sell graphene in large quantities.[3] The price of epitaxial graphene on silicon carbide is dominated by the substrate price, which was approximately $100/cm2 as of 2009.
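
The quoted 2008 price can be sanity-checked with a rough back-of-the-envelope conversion. The hair diameter below is an assumed illustrative value (human hairs range from roughly 20 to 180 µm), so the result is an order-of-magnitude figure only.

import math

price_per_sample_usd = 1000.0     # quoted 2008 price for a hair-cross-section-sized sample
hair_diameter_cm = 35e-4          # assumed fine hair: 35 micrometres, expressed in cm
cross_section_cm2 = math.pi * (hair_diameter_cm / 2.0) ** 2

print(f"sample area ~ {cross_section_cm2:.1e} cm^2")
print(f"price       ~ ${price_per_sample_usd / cross_section_cm2:.1e} per cm^2")
# ~1e-5 cm^2 and ~1e8 USD/cm^2, i.e. the ~$100,000,000/cm^2 order of magnitude quoted above.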

Hong and his team in South Korea pioneered the synthesis of large-scale graphene films using chemical vapour deposition (CVD) on thin nickel layers, which triggered research on practical applications,[4] with wafer sizes up to 30 inches (760 mm) reported.[5]

In 2013, the European Union made a €1 billion grant to be used for research into potential graphene applications.[6]

In 2013 the Graphene Flagship consortium formed, including Chalmers University of Technology and seven other European universities and research centers, along with Nokia.[7]

Medicine

Tissue engineering

Graphene has been investigated for tissue engineering. It has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for bone tissue engineering applications.[8] Dispersion of a low weight percentage of graphene (~0.02 wt.%) increased the compressive and flexural mechanical properties of the polymeric nanocomposites.

Contrast agents/bioimaging

Functionalized and surfactant-dispersed graphene solutions have been designed as blood pool MRI contrast agents.[9] Additionally, iodine- and manganese-incorporating graphene nanoparticles have served as multimodal MRI–CT contrast agents.[10] Graphene micro- and nano-particles have served as contrast agents for photoacoustic and thermoacoustic tomography.[11] Graphene has also been reported to be efficiently taken up by cancerous cells, enabling the design of drug delivery agents for cancer therapy.[12] Graphene nanoparticles of various morphologies are non-toxic at low concentrations and do not alter stem cell differentiation, suggesting that they may be safe to use in biomedical applications.[13]

Polymerase chain reaction

Graphene is reported to have enhanced PCR by increasing the yield of DNA product.[14] Experiments revealed that graphene's thermal conductivity could be the main factor behind this result. Graphene yields DNA product equivalent to positive control with up to 65% reduction in PCR cycles.

Devices

Graphene's modifiable chemistry, large surface area, atomic thickness and molecularly gatable structure make antibody-functionalized graphene sheets excellent candidates for mammalian and microbial detection and diagnosis devices.[15] Graphene is so thin that water has near-perfect wetting transparency on it, an important property for developing biosensor applications.[16] This means that a sensor coated in graphene has as much contact with an aqueous system as an uncoated sensor, while remaining mechanically protected from its environment.

Energy of the electrons with wavenumber k in graphene, calculated in the tight-binding approximation. The unoccupied (occupied) states, colored blue–red (yellow–green), touch each other without an energy gap at exactly six k-vectors, the corners of the hexagonal Brillouin zone.
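
To make the caption concrete: the band energies it describes come from the standard nearest-neighbour tight-binding model of graphene. The sketch below uses textbook parameter values (hopping energy t ≈ 2.7 eV, carbon–carbon distance a ≈ 1.42 Å), which are assumptions for illustration rather than values given in this article.

import numpy as np

a = 1.42e-10   # carbon-carbon distance in metres (assumed textbook value)
t = 2.7        # nearest-neighbour hopping energy in eV (assumed textbook value)

def graphene_bands(kx, ky):
    """Nearest-neighbour tight-binding band energies E+(k), E-(k) of monolayer graphene."""
    f = (2.0 * np.cos(np.sqrt(3.0) * ky * a)
         + 4.0 * np.cos(np.sqrt(3.0) * ky * a / 2.0) * np.cos(3.0 * kx * a / 2.0))
    e = t * np.sqrt(np.maximum(3.0 + f, 0.0))   # clamp tiny negative rounding errors
    return e, -e                                # unoccupied (conduction) and occupied (valence) bands

# The bands touch with zero gap at the K point, one corner of the hexagonal Brillouin zone:
K = (2.0 * np.pi / (3.0 * a), 2.0 * np.pi / (3.0 * np.sqrt(3.0) * a))
print(graphene_bands(*K))        # ~ (0.0, 0.0): a Dirac point
print(graphene_bands(0.0, 0.0))  # Gamma point: +/-3t = +/-8.1 eV

The six gap-closing k-vectors referred to in the caption are the corners K and K′ of that Brillouin zone.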

Integration of graphene (thickness of 0.34 nm) layers as nanoelectrodes into a nanopore[17] can potentially solve a bottleneck for nanopore-based single-molecule DNA sequencing.

On November 20, 2013 the Bill & Melinda Gates Foundation awarded $100,000 'to develop new elastic composite materials for condoms containing nanomaterials like graphene'.[18]

In 2014, graphene-based, transparent (across infrared to ultraviolet frequencies), flexible, implantable medical sensor microarrays were announced that allow the viewing of brain tissue hidden by implants. Optical transparency was >90%. Applications demonstrated include optogenetic activation of focal cortical areas, in vivo imaging of cortical vasculature via fluorescence microscopy and 3D optical coherence tomography.[19][20]

Drug delivery[edit]

  • Researchers at Monash University discovered that a sheet of graphene oxide can be transformed into liquid crystal droplets spontaneously – like a polymer – simply by placing the material in a solution and manipulating the pH. The graphene droplets change their structure in the presence of an external magnetic field. This finding opens the door to potentially carrying a drug in the graphene droplets and releasing it upon reaching the targeted tissue, when the droplets change shape under the magnetic field. Another possible application is in disease detection, if graphene is found to change shape in the presence of certain disease markers such as toxins.[21][22]
  • A graphene ‘flying carpet’ was demonstrated to deliver two anti-cancer drugs sequentially to lung tumor cells (A549 cells) in a mouse model. Doxorubicin (DOX) is embedded onto the graphene sheet, while molecules of tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) are linked to the nanostructure via short peptide chains. Injected intravenously, the graphene strips with the drug payload preferentially concentrate at the cancer cells due to the leakiness of blood vessels around the tumor. Receptors on the cancer cell membrane bind TRAIL, and cell-surface enzymes clip the peptide, releasing the drug onto the cell surface. Without the bulky TRAIL, the graphene strips with the embedded DOX are swallowed into the cells. The intracellular acidic environment promotes DOX’s release from graphene. TRAIL on the cell surface triggers apoptosis while DOX attacks the nucleus. The two drugs work synergistically and were found to be more effective than either drug alone.[23][24]

Biomicrorobotics

Researchers demonstrated a nanoscale biomicrorobot (or cytobot) made by cladding a living endospore cell with graphene quantum dots. The device acted as a humidity sensor.[25]

Testing

In 2014 a graphene-based blood glucose testing product was announced.[26][27]

Electronics

For integrated circuits, graphene has a high carrier mobility, as well as low noise, allowing it to be used as the channel in a field-effect transistor. Single sheets of graphene are hard to produce and even harder to make on an appropriate substrate.[28]

In 2008, the smallest transistor so far, one atom thick and ten atoms wide, was made of graphene.[29] IBM announced in December 2008 that they had fabricated and characterized graphene transistors operating at GHz frequencies.[30] In May 2009, an n-type transistor was announced, meaning that both n- and p-type graphene transistors had been created.[31][32] A functional graphene integrated circuit was demonstrated – a complementary inverter consisting of one p- and one n-type graphene transistor.[33] However, this inverter suffered from a very low voltage gain.

According to a January 2010 report,[34] graphene was epitaxially grown on SiC in a quantity and with quality suitable for mass production of integrated circuits. At high temperatures, the quantum Hall effect could be measured in these samples. IBM built 'processors' using 100 GHz transistors on 2-inch (51 mm) graphene sheets.[35]

In June 2011, IBM researchers announced that they had succeeded in creating the first graphene-based integrated circuit, a broadband radio mixer.[36] The circuit handled frequencies up to 10 GHz. Its performance was unaffected by temperatures up to 127 °C.

In June 2013, an eight-transistor, 1.28 GHz ring oscillator circuit was described.[37]

Transistors

Graphene exhibits a pronounced response to perpendicular external electric fields, potentially forming field-effect transistors (FETs). A 2004 paper documented FETs with an on–off ratio of ~30 at room temperature.[citation needed] A 2006 paper announced an all-graphene planar FET with side gates.[38] Their devices showed changes of 2% at cryogenic temperatures. The first top-gated FET (on–off ratio of <2) was demonstrated in 2007.[39]
Graphene nanoribbons may prove generally capable of replacing silicon as a semiconductor.[40]
US patent 7015142  for graphene-based electronics was issued in 2006. In 2008, researchers at MIT Lincoln Lab produced hundreds of transistors on a single chip[41] and in 2009, very high frequency transistors were produced at Hughes Research Laboratories.[42]

A 2008 paper demonstrated a switching effect based on a reversible chemical modification of the graphene layer that gives an on–off ratio of greater than six orders of magnitude. These reversible switches could potentially be employed in nonvolatile memories.[43]

In 2009, researchers demonstrated four different types of logic gates, each composed of a single graphene transistor.[44]

Practical uses for these circuits are limited by the very small voltage gain they exhibit. Typically, the amplitude of the output signal is about one-fortieth that of the input signal. Moreover, none of these circuits operated at frequencies higher than 25 kHz.

In the same year, tight-binding numerical simulations[45] demonstrated that the band-gap induced in graphene bilayer field-effect transistors is not sufficiently large for high-performance digital transistors, but can be sufficient for ultra-low-voltage applications when a tunnel-FET architecture is exploited.[46]

In February 2010, researchers announced transistors with an on/off rate of 100 gigahertz, far exceeding the rates of previous attempts, and exceeding the speed of silicon transistors with an equal gate length. The 240 nm devices were made with conventional silicon-manufacturing equipment.[47][48][49]

In November 2011, researchers used 3D printing (additive manufacturing) as a method for fabricating graphene devices.[50]

In 2013, researchers demonstrated graphene's high mobility in a detector that allows broadband frequency selectivity ranging from the THz to the IR region (0.76–33 THz).[51] A separate group created a terahertz-speed transistor with bistable characteristics, meaning that the device can spontaneously switch between two electronic states. The device consists of two layers of graphene separated by an insulating layer of boron nitride a few atomic layers thick. Electrons move through this barrier by quantum tunneling. These new transistors exhibit “negative differential conductance,” whereby the same electrical current flows at two different applied voltages.[52]

Graphene does not have an energy band-gap, which presents a hurdle for its applications in digital logic gates. Efforts to induce a band-gap in graphene via quantum confinement or surface functionalization have not yet produced a breakthrough. However, the negative differential resistance experimentally observed in graphene field-effect transistors of "conventional" design, an intrinsic property of graphene arising from its symmetric band structure under certain biasing schemes, allows the construction of viable non-Boolean computational architectures with gapless graphene. The results represent a conceptual shift in graphene research and indicate an alternative route for graphene's applications in information processing.[53]

In 2013 researchers reported the creation of transistors printed on flexible plastic that operate at 25 gigahertz, sufficient for communications circuits, and that can be fabricated at scale. The researchers first fabricate the non-graphene-containing structures (the electrodes and gates) on plastic sheets. Separately, they grow large graphene sheets on metal, then peel them off and transfer them to the plastic. Finally, they top the sheet with a waterproof layer. The devices work after being soaked in water, and are flexible enough to be folded.[54]

Trilayer graphene

An electric field can change trilayer graphene's crystal structure, transforming its behavior from metal-like to semiconductor-like. A sharp metal scanning tunneling microscopy tip was able to move the domain border between the upper and lower graphene configurations. One side of the material behaves as a metal, while the other side behaves as a semiconductor. Trilayer graphene can be stacked in either Bernal or rhombohedral configurations, which can exist in a single flake. The two domains are separated by a precise boundary at which the middle layer is strained to accommodate the transition from one stacking pattern to the other.[55]

Silicon transistors function as either p-type or n-type semiconductors, whereas graphene can operate as both, which lowers costs and increases versatility. The technique provides the basis for a field-effect transistor. Scalable manufacturing techniques have yet to be developed.[55]

In trilayer graphene, the two stacking configurations exhibit very different electronic properties. The region between them consists of a localized strain soliton where the carbon atoms of one graphene layer shift by the carbon–carbon bond distance. The free-energy difference between the two stacking configurations scales quadratically with electric field, favoring rhombohedral stacking as the electric field increases.[55]

This ability to control the stacking order opens the way to new devices that combine structural and electrical properties.[55][56]

Graphene-based transistors could be much thinner than modern silicon devices, allowing faster and smaller configurations.[citation needed]

Transparent conducting electrodes

Graphene's high electrical conductivity and high optical transparency make it a candidate for transparent conducting electrodes, required for such applications as touchscreens, liquid crystal displays, organic photovoltaic cells, and organic light-emitting diodes. In particular, graphene's mechanical strength and flexibility are advantageous compared to indium tin oxide, which is brittle. Graphene films may be deposited from solution over large areas.[57][58]

Large-area, continuous, transparent and highly conducting few-layered graphene films were produced by chemical vapor deposition and used as anodes for application in photovoltaic devices. A power conversion efficiency (PCE) up to 1.71% was demonstrated, which is 55.2% of the PCE of a control device based on indium tin oxide.[59]

Organic light-emitting diodes (OLEDs) with graphene anodes have been demonstrated.[60] The electronic and optical performance of graphene-based devices is similar to that of devices made with indium tin oxide.

A carbon-based device called a light-emitting electrochemical cell (LEC) was demonstrated with chemically-derived graphene as the cathode and the conductive polymer PEDOT as the anode.[61] Unlike its predecessors, this device contains only carbon-based electrodes, with no metal.[citation needed]

In 2014 a prototype graphene-based flexible display was demonstrated.[62]

Frequency multiplier

In 2009, researchers built experimental graphene frequency multipliers that take an incoming signal of a certain frequency and output a signal at a multiple of that frequency.[63]

Optoelectronics

Graphene strongly interacts with photons, with the potential for direct band-gap creation. This is promising for optoelectronic and nanophotonic devices. Light interaction arises due to the Van Hove singularity. Graphene displays different time scales in response to photon interaction, ranging from femtoseconds (ultra-fast) to picoseconds. Potential uses include transparent films, touch screens and light emitters or as a plasmonic device that confines light and alters wavelengths.[64]

Hall effect sensors

Due to its extremely high electron mobility, graphene may be used to produce highly sensitive Hall effect sensors.[65] A potential application of such sensors is in DC current transformers for special applications.[citation needed]

Quantum dots

Graphene quantum dots (GQDs) have all dimensions smaller than 10 nm. Their size and edge crystallography govern their electrical, magnetic, optical and chemical properties. GQDs can be produced via graphite nanotomy[66] or via bottom-up, solution-based routes (Diels-Alder, cyclotrimerization and/or cyclodehydrogenation reactions).[67]
GQDs with controlled structure can be incorporated into applications in electronics, optoelectronics and electromagnetics. Quantum confinement can be created by changing the width of graphene nanoribbons (GNRs) at selected points along the ribbon.[29][68] GQDs are also studied as catalysts for fuel cells.[69]

Organic electronics

A semiconducting polymer (poly(3-hexylthiophene))[70] placed on top of single-layer graphene vertically conducts electric charge better than on a thin layer of silicon. A 50 nm thick polymer film conducted charge about 50 times better than a 10 nm thick film, potentially because the former consists of a mosaic of variably oriented crystallites that forms a continuous pathway of interconnected crystals. In a thin film, or on silicon,[70] plate-like crystallites are oriented parallel to the graphene layer. Uses include solar cells.[71]

Light processing

Optical modulator

When the Fermi level of graphene is tuned, its optical absorption can be changed. In 2011, researchers reported the first graphene-based optical modulator. Operating at 1.2 GHz without a temperature controller, this modulator has a broad bandwidth (from 1.3 to 1.6 μm) and small footprint (~25 μm²).[72]
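A compact way to see how gating the Fermi level modulates absorption is the Pauli-blocking condition for interband transitions; this is a schematic picture, not necessarily the full model of the device in [72]:

\[ \text{interband absorption is suppressed when } \hbar\omega < 2\,|E_F| \]

At the telecom wavelengths quoted (1.3–1.6 μm, photon energies of roughly 0.78–0.95 eV), gate-shifting |E_F| beyond about 0.4–0.5 eV therefore switches the interband absorption on and off.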

Infrared light detection

Graphene reacts to the infrared spectrum at room temperature, albeit with a sensitivity 100 to 1000 times too low for practical applications. However, two graphene layers separated by an insulator allowed an electric field, produced by holes left behind by photo-freed electrons in one layer, to affect a current running through the other layer. The process produces little heat, making it suitable for use in night-vision optics. The sandwich is thin enough to be integrated in handheld devices, eyeglass-mounted computers and even contact lenses.[73]

Energy

Generation

Ethanol distillation

Graphene oxide membranes allow water vapor to pass through, but are impermeable to other liquids and gases.[74] This phenomenon has been used for further distilling of vodka to higher alcohol concentrations, in a room-temperature laboratory, without the application of heat or vacuum as used in traditional distillation methods.[75] Further development and commercialization of such membranes could revolutionize the economics of biofuel production and the alcoholic beverage industry.[citation needed]

Solar cells

Graphene has a unique combination of high electrical conductivity and optical transparency, which makes it a candidate for use in solar cells. A single sheet of graphene is a zero-bandgap semiconductor whose charge carriers are delocalized over large areas, implying that carrier scattering does not occur. Because this material only absorbs 2.6% of green light and 2.3% of red light,[76] it is a candidate for applications requiring a transparent conductor.
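The quoted absorption figures follow from graphene's universal optical absorption, fixed by the fine-structure constant α; as a minimal estimate (treating stacked layers as independent absorbers):

\[ A_1 = \pi\alpha \approx 0.023 \;(\approx 2.3\%), \qquad T_N \approx (1-\pi\alpha)^N \]

so one layer transmits roughly 97.7% of incident light, and the slightly higher absorption of green light quoted above reflects small deviations from the universal value at shorter wavelengths.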

Graphene can be assembled into a film electrode with low roughness. However, graphene films produced via solution processing contain lattice defects and grain boundaries that act as recombination centers and decrease the material's electrical conductivity. Thus, these films must be made thicker than one atomic layer to obtain useful sheet resistances. This added resistance can be combated by incorporating conductive filler materials, such as a silica matrix. The electrical conductivity of reduced graphene films can be improved by attaching large aromatic molecules such as pyrene-1-sulfonic acid sodium salt (PyS) and the disodium salt of 3,4,9,10-perylenetetracarboxylic diimide bisbenzenesulfonic acid (PDI). These molecules, under high temperatures, facilitate better π-conjugation of the graphene basal plane. Graphene films have high transparency in the visible and near-infrared regions and are chemically and thermally stable.[77]
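The need for multilayer films can be seen from a simple parallel-conduction estimate, an idealization that ignores interlayer defects and grain boundaries:

\[ R_s(N) \approx \frac{R_s(1)}{N} \]

so stacking N layers lowers the sheet resistance roughly N-fold, at the cost of the transmittance penalty (1-\pi\alpha)^N noted above.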

For graphene to be used in commercial solar cells, large-scale production is required. However, none of the available production routes, including the peeling of graphene from pyrolytic graphite or the thermal decomposition of silicon carbide, is yet scalable.[77]

Graphene's high charge mobilities recommend it for use as a charge collector and transporter in photovoltaics (PV). Using graphene as a photoactive material requires its bandgap to be 1.4–1.9 eV. In 2010, single cell efficiencies of nanostructured graphene-based PVs of over 12% were achieved. According to P. Mukhopadhyay and R. K. Gupta organic photovoltaics could be "devices in which semiconducting graphene is used as the photoactive material and metallic graphene is used as the conductive electrodes".[77]
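The 1.4–1.9 eV target range corresponds to absorption edges in the red to near-infrared, where a single-junction absorber best matches the solar spectrum; converting with the standard photon-energy relation (not specific to the cited work):

\[ \lambda_g = \frac{hc}{E_g} \approx \frac{1240\ \text{nm eV}}{E_g} \;\Rightarrow\; 1.4\ \text{eV} \approx 886\ \text{nm}, \qquad 1.9\ \text{eV} \approx 653\ \text{nm} \]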

In 2010, Xinming Li and Hongwei Zhu from Tsinghua University first reported a graphene-silicon heterojunction solar cell, in which graphene served as a transparent electrode and introduced a built-in electric field near the interface between the graphene and the n-type silicon to help collect photo-generated carriers. Further studies have advanced this type of photovoltaic device.[78] For example, in 2012 researchers from the University of Florida reported an efficiency of 8.6% for a prototype cell consisting of a silicon wafer coated with a layer of graphene doped with trifluoromethanesulfonyl-amide (TFSA). In 2013, Xinming Li found that chemical doping could improve the graphene's characteristics and significantly raise the efficiency of the graphene-silicon solar cell to 9.6%.[79]
In 2015 researchers reported an efficiency of 15.6% by choosing the optimal oxide thickness on the silicon.[80]

In 2013 another team claimed to have reached 15.6% using a combination of titanium oxide and graphene as a charge collector and perovskite as a sunlight absorber. The device can be manufactured at temperatures under 150 °C (302 °F) using solution-based deposition, which lowers production costs and offers the potential of using flexible plastics.[81]

Large scale production of highly transparent graphene films by chemical vapor deposition was achieved in 2008. In this process, ultra-thin graphene sheets are created by first depositing carbon atoms in the form of graphene films on a nickel plate from methane gas. A protective layer of thermoplastic is laid over the graphene layer and the nickel underneath is dissolved in an acid bath. The final step is to attach the plastic-protected graphene to a flexible polymer sheet, which can then be incorporated into an OPV cell. Graphene/polymer sheets range in size up to 150 square centimeters and can be used to create dense arrays of flexible OPV cells. It may eventually be possible to run printing presses covering extensive areas with inexpensive solar cells, much like newspaper presses print newspapers (roll-to-roll).[82]

Silicon generates only one current-driving electron for each photon it absorbs, while graphene can produce multiple electrons. Solar cells made with graphene could offer 60% conversion efficiency – double the widely accepted maximum efficiency of silicon cells.[83]

Fuel cells

Appropriately perforated graphene (and hexagonal boron nitride, hBN) can allow protons to pass through it, offering the potential for using graphene monolayers as a barrier that blocks hydrogen atoms but not protons/ionized hydrogen (hydrogen atoms with their electrons stripped off). Such membranes could even be used to extract hydrogen from the atmosphere, potentially powering electric generators with ambient air.[84]

The membranes are more effective at elevated temperatures and when covered with catalytic nanoparticles such as platinum.[84]

Graphene could solve a major problem for fuel cells: fuel crossover that reduces efficiency and durability.[84]

At room temperature, monolayer hBN outperforms graphene in proton conductivity, with a resistivity to proton flow of about 10 Ω cm² and a low activation energy of about 0.3 electronvolts. At higher temperatures, graphene outperforms hBN, with its resistivity estimated to fall below 10⁻³ Ω cm² above 250 degrees Celsius.[85]
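The temperature dependence described above is consistent with thermally activated (Arrhenius-type) proton transport; a minimal sketch, taking the activation energies as quoted rather than fitting them here:

\[ \rho_A(T) \propto \exp\!\left(\frac{E_a}{k_B T}\right) \]

where ρ_A is the areal resistivity to proton flow and E_a the activation energy (about 0.3 eV for monolayer hBN, larger for graphene), so the material with the larger barrier improves more steeply with temperature and eventually overtakes hBN.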

In another project, protons easily pass through slightly imperfect graphene membranes on fused silica in water.[86] The membrane was exposed to cycles of high and low pH. Protons transferred reversibly from the aqueous phase through the graphene to the other side where they undergo acid–base chemistry with silica hydroxyl groups. Computer simulations indicated energy barriers of 0.61–0.75 eV for hydroxyl-terminated atomic defects that participate in a Grotthuss-type relay, while pyrylium-like ether terminations did not.[87]

Storage

Supercapacitor

Due to graphene's high surface area to mass ratio, one potential application is in the conductive plates of supercapacitors.[88]
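The surface-area argument follows from treating the electric double layer as a parallel-plate capacitor, a rough idealization rather than a device model:

\[ C \approx \frac{\varepsilon_r \varepsilon_0 A}{d} \]

with d the sub-nanometre double-layer thickness, so capacitance scales with the accessible electrode area A; graphene's theoretical specific surface area of roughly 2,630 m²/g is what makes it attractive here.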

In February 2013 researchers announced a novel technique to produce graphene supercapacitors based on the DVD burner reduction approach.[89]

In 2014 a supercapacitor was announced that was claimed to achieve energy density comparable to current lithium-ion batteries.[26][27]

In 2015 the technique was adapted to produce stacked, 3-D supercapacitors. Laser-induced graphene was produced on both sides of a polymer sheet. The sections were then stacked, separated by solid electrolytes, making multiple microsupercapacitors. The stacked configuration substantially increased the energy density of the result. In testing, the researchers charged and discharged the devices for thousands of cycles with almost no loss of capacitance.[90]
The resulting devices were mechanically flexible, surviving 8,000 bending cycles. This makes them potentially suitable for rolling into a cylindrical configuration. Solid-state polymeric electrolyte-based devices exhibit areal capacitance of >9 mF/cm² at a current density of 0.02 mA/cm², over twice that of conventional aqueous electrolytes.[91]

Also in 2015, another project announced a microsupercapacitor small enough to fit in wearable or implantable devices. Just one-fifth the thickness of a sheet of paper, it is capable of holding more than twice as much charge as a comparable thin-film lithium battery. The design employed laser-scribed graphene (LSG) combined with manganese dioxide. The devices can be fabricated without extreme temperatures or expensive “dry rooms”. Their capacity is six times that of commercially available supercapacitors.[92] The device reached a volumetric capacitance of over 1,100 F/cm³, corresponding to a specific capacitance of the constituent MnO2 of 1,145 F/g, close to the theoretical maximum of 1,380 F/g. Energy density varies between 22 and 42 Wh/l depending on device configuration.[93]
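The spread in energy density "depending on device configuration" reflects the usual relation between capacitance, voltage window and the volume used for normalization; as a general relation, not a recalculation of the cited figures:

\[ E_V = \frac{1}{2}\,\frac{C\,V^2}{\mathcal{V}} \]

where C is the device capacitance, V the operating voltage window and the denominator the volume counted; a wider voltage window, or counting only the electrode volume rather than the full packaged stack, yields the higher Wh/l figures.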

Electrode for Li-ion batteries

Stable Li-ion cycling has recently been demonstrated in bi- and few layer graphene films grown on nickel substrates,[94] while single layer graphene films have been demonstrated as a protective layer against corrosion in battery components such as the battery case.[95] This creates possibilities for flexible electrodes for microscale Li-ion batteries where the anode acts as the active material as well as the current collector.[96]
There are also silicon-graphene anode Li-ion batteries.[97]

Hydrogen storage

Hydrogenation-assisted graphene origami (HAGO) was used to cause approximately square graphene sheets to fold into a cage that can store hydrogen at 9.5 percent by weight. The U.S. Department of Energy had set a goal of 7.5 percent hydrogen by weight by 2020. An electric field causes the cage to open and close.[98]
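The "percent by weight" figures use the standard gravimetric definition, comparing the stored hydrogen mass to the total mass of the loaded structure (here the folded graphene cage); stated generally rather than recomputing the cited result:

\[ \text{wt\%} = \frac{m_{\mathrm{H_2}}}{m_{\mathrm{H_2}} + m_{\text{host}}} \times 100 \]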

Rechargeable battery

Researchers at Northwestern University built a lithium-ion battery made of graphene and silicon, which was claimed to last over a week on a single charge and only took 15 minutes to charge.[99]

Sensors

Molecular adsorption

Theoretically, graphene makes an excellent sensor due to its 2D structure: its entire volume is exposed to the surrounding environment, making it very efficient at detecting adsorbed molecules. However, similar to carbon nanotubes, graphene has no dangling bonds on its surface, so gaseous molecules cannot be readily adsorbed onto it and intrinsic graphene is insensitive.[100] The sensitivity of graphene chemical gas sensors can be dramatically enhanced by functionalization, for example coating the film with a thin layer of certain polymers. The thin polymer layer acts like a concentrator that absorbs gaseous molecules. Molecule adsorption introduces a local change in the electrical resistance of the graphene sensor. While this effect occurs in other materials, graphene is superior due to its high electrical conductivity (even when few carriers are present) and low noise, which make this change in resistance detectable.[101]
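The detection argument can be made quantitative with a Drude-like estimate: adsorbed molecules donate or withdraw electrons, shifting the carrier density, and the high mobility converts even a small shift into a measurable conductance change. As a back-of-the-envelope relation, not a model of any specific sensor:

\[ \sigma = n e \mu \quad\Rightarrow\quad \Delta\sigma \approx e\,\mu\,\Delta n \]

so with mobilities of order 10³–10⁴ cm²/(V·s) and low intrinsic noise, charge transfer from a small number of adsorbates per unit area still produces a resolvable resistance change.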

Piezoelectric effect

Density functional theory simulations predict that depositing certain adatoms on graphene can render it piezoelectrically responsive to an electric field applied in the out-of-plane direction. This type of locally engineered piezoelectricity is similar in magnitude to that of bulk piezoelectric materials and makes graphene a candidate for control and sensing in nanoscale devices.[102]

Body motion

Rubber bands infused with graphene ("G-bands") can be used as inexpensive body sensors. The bands remain pliable and can be used to measure breathing, heart rate, or movement. Lightweight sensor suits for vulnerable patients could make it possible to monitor subtle movement remotely. These sensors display a 10-fold increase in resistance and work at strains exceeding 800%. Gauge factors of up to 35 were observed. Such sensors can function at vibration frequencies of at least 160 Hz. At 60 Hz, strains of at least 6% at strain rates exceeding 6000%/s can be monitored.[103]
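The quoted gauge factors use the standard strain-gauge definition; for reference (the figures above are empirical and depend on the strain range over which the slope is taken):

\[ GF = \frac{\Delta R / R_0}{\varepsilon} \]

where ΔR/R_0 is the fractional resistance change and ε the applied strain; a 10-fold resistance increase spread over very large strains corresponds to a modest average gauge factor, which suggests the peak value of 35 applies over narrower strain ranges.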

Environmental

Contaminant removal

Graphene oxide is non-toxic and biodegradable. Its surface is covered with epoxy, hydroxyl, and carboxyl groups that interact with cations and anions. It is soluble in water and forms stable colloid suspensions in other liquids because it is amphiphilic (able to mix with water or oil). Dispersed in liquids it shows excellent sorption capacities. It can remove copper, cobalt, cadmium, arsenate and organic solvents.[104]

In 2013 it was shown to be able to remove radioactive nuclides from water, including radioactive isotopes of actinides (elements with atomic numbers 89 to 103, including thorium, uranium, neptunium, plutonium and americium) and lanthanides (the ‘rare earths’ with atomic numbers 57 to 71, including europium).[104]

Even at concentrations below 0.1 g/L, radionuclide sorption proceeds rapidly. At pH between 4 and 8, graphene oxide removes over 90% of nuclides, including uranium and europium. At pH above 7, more than 70% of strontium and technetium are removed, along with up to 20% of neptunium.[104]

Water filtration

Research suggests that graphene filters could outperform other techniques of desalination by a significant margin.[105]

Other

Plasmonics and metamaterials

Graphene accommodates a plasmonic surface mode, observed recently via near-field infrared optical microscopy[106][107] and infrared spectroscopy.[108] Potential applications are in the terahertz to mid-infrared frequencies,[109] such as terahertz and mid-infrared light modulators, passive terahertz filters, mid-infrared photodetectors and biosensors.

Lubricant

Scientists discovered that graphene works better as a lubricant than traditionally used graphite. A one-atom-thick layer of graphene between a steel ball and a steel disc lasted for 6,500 cycles; conventional lubricants lasted 1,000 cycles.[110]

Radio wave absorption

Stacked graphene layers on a quartz substrate increased the absorption of millimeter (radio) waves by 90 percent over a 125–165 GHz bandwidth, extensible to microwave and low-terahertz frequencies, while remaining transparent to visible light. For example, graphene could be used as a coating for buildings or windows to block radio waves. Absorption is a result of mutually coupled Fabry–Perot resonators, one for each graphene-quartz layer. A repeated transfer-and-etch process was used to control surface resistivity.[111][112]
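In the Fabry–Perot picture, each quartz spacer acts as a resonant cavity bounded by partially absorbing graphene sheets; resonance occurs when the round-trip optical path equals a whole number of wavelengths (the standard cavity condition, not a full model of the coupled stack):

\[ 2\,n\,d\cos\theta = m\,\lambda, \qquad m = 1, 2, \dots \]

where n and d are the refractive index and thickness of the quartz layer; coupling several such cavities is what spreads the absorption across the quoted 125–165 GHz band.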

Redox

Graphene oxide can be reversibly reduced and oxidized using electrical stimulus. Controlled reduction and oxidation in two-terminal devices containing multilayer graphene oxide films are shown to result in switching between partially reduced graphene oxide and graphene, a process that modifies electronic and optical properties. Oxidation and reduction are related to resistive switching.[113]

Nanoantennas

A graphene-based plasmonic nano-antenna (GPN) can operate efficiently at millimeter radio wavelengths. The wavelength of surface plasmon polaritons for a given frequency is several hundred times smaller than the wavelength of freely propagating electromagnetic waves of the same frequency. These speed and size differences enable efficient graphene-based antennas to be far smaller than conventional alternatives, which operate at frequencies 100–1000 times higher than GPNs and produce only 0.01–0.001 as many photons.[114]

An electromagnetic (EM) wave directed vertically onto a graphene surface excites the graphene into oscillations that interact with those in the dielectric on which the graphene is mounted, thereby forming surface plasmon polaritons (SPP). When the antenna becomes resonant (an integral number of SPP wavelengths fit into the physical dimensions of the graphene), the SPP/EM coupling increases greatly, efficiently transferring energy between the two.[114]
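Making the size advantage explicit: with the resonance condition stated above (an integral number of SPP wavelengths fitting the antenna) and a plasmon wavelength hundreds of times shorter than the free-space wavelength, the resonant length shrinks by the same factor. Schematically (the exact mode structure depends on geometry and doping):

\[ L \approx m\,\lambda_{\mathrm{SPP}}, \qquad \lambda_{\mathrm{SPP}} \ll \lambda_0 = \frac{c}{f} \]

so a graphene antenna resonant at a given frequency can be smaller than a conventional metal antenna by roughly the factor λ_0/λ_SPP.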

A phased array antenna 100 µm in diameter could produce 300 GHz beams only a few degrees in diameter, instead of the 180-degree radiation from a conventional metal antenna of that size. Potential uses include smart dust, low-power terabit wireless networks[114] and photonics.[115]

A nanoscale gold rod antenna captured and transformed EM energy into graphene plasmons, analogous to a radio antenna converting radio waves into electromagnetic waves in a metal cable. The plasmon wavefronts can be directly controlled by adjusting the antenna geometry. The waves were focused by curving the antenna, and refracted by a prism-shaped graphene bilayer, because the conductivity in the two-atom-thick prism is larger than in the surrounding one-atom-thick layer.[115]

Sound transducers

Graphene provides relatively good frequency response, suggesting uses in audio speakers. Its light weight may make it suitable for microphones as well.[116]

Waterproof coating

Graphene could potentially usher in a new generation of waterproof devices whose chassis may not need to be sealed like today's devices.[99][dubious ]

Coolant additive

Graphene's high thermal conductivity suggests that it could be used as an additive in coolants. Preliminary research work showed that 5% graphene by volume can enhance the thermal conductivity of a base fluid by 86%.[117]
Graphene's enhanced thermal conductivity has also been exploited in PCR, as noted above.[14]

Reference material

Graphene's properties suggest it as a reference material for characterizing electroconductive and transparent materials. One layer of graphene absorbs 2.3% of red light.[118]

This property was used to define the "conductivity of transparency", a figure of merit that combines sheet resistance and transparency, allowing materials to be compared without the use of two independent parameters.[119]

Thermal management

In 2011, researchers reported that a three-dimensional, vertically aligned, functionalized multilayer graphene architecture can be an approach for graphene-based thermal interfacial materials (TIMs) with superior thermal conductivity and ultra-low interfacial thermal resistance between graphene and metal.[120]

Graphene-metal composites can be utilized in thermal interface materials.[121]

Adding a layer of graphene to each side of a copper film increased the metal's heat-conducting properties up to 24%. This suggests the possibility of using them for semiconductor interconnects in computer chips. The improvement is the result of changes in copper’s nano- and microstructure, not from graphene’s independent action as an additional heat conducting channel. High temperature chemical vapor deposition stimulates grain size growth in copper films. The larger grain sizes improve heat conduction. The heat conduction improvement was more pronounced in thinner copper films, which is useful as copper interconnects shrink.[122]

Structural material

Graphene's strength, stiffness and lightness suggest its use with carbon fiber. Graphene has been used as a reinforcing agent to improve the mechanical properties of biodegradable polymeric nanocomposites for engineering bone tissue.[123]

Catalyst

In 2014, researchers at The University of Western Australia discovered that nano-sized fragments of graphene can speed up the rate of chemical reactions.[124]

Graphene device makes ultrafast light-to-energy conversion possible


Original link:  http://www.gizmag.com/graphene-ultrafast-light-energy-conversion-photodetector-semiconductor/37005/

Using layers of graphene, scientists claim to have created a photodetector that converts light to energy in less than 50 quadrillionths of a second (Image: ICFO/Achim Woessner)

Converting light to electricity is one of the pillars of modern electronics, with the process essential for the operation of everything from solar cells and TV remote control receivers through to laser communications and astronomical telescopes. These devices rely on the swift and effective operation of this technology, especially in scientific equipment, to ensure the most efficient conversion rates possible. In this vein, researchers from the Institute of Photonic Sciences (Institut de Ciències Fotòniques/ICFO) in Barcelona have demonstrated a graphene-based photodetector they claim converts light into electricity in less than 50 quadrillionths of a second.

Graphene has already been identified as a superior substance for the transformation of photons to electrical current, even in the infrared part of the spectrum. However, prior to the ICFO research, it was unclear exactly how fast graphene would react when subjected to ultra-rapid bursts of light energy.

To test the speed of conversion, the ICFO team – in collaboration with scientists from MIT and the University of California, Riverside – utilized an arrangement consisting of graphene film layers set up as a p-n (positive-negative) junction semiconductor, a sub-50 femtosecond, titanium-sapphire, pulse-shaped laser to provide the ultrafast flashes of light, along with an ultra-sensitive pulse detector to capture the speed of conversion to electrical energy.

When this arrangement was fired up and tested, the scientists found that the photovoltage was generated in less than 50 femtoseconds (50 quadrillionths of a second).

According to the researchers, this blistering speed of conversion is due to the structure of graphene, which allows exceptionally rapid and effective interaction among all of the conduction-band carriers it contains. In other words, excitation of the graphene by the laser pulses causes the electrons in the material to heat up, and stay hot, while the underlying carbon lattice remains cool. And, because the electrons in the laser-excited graphene do not cool down rapidly, since they do not easily recouple with the graphene lattice, they remain in that state and transfer their energy much more rapidly.

As such, constant laser-pulse excitation of an area of graphene quickly results in superfast electron distribution within the material at constantly elevated electron temperatures. This rapid conversion to electron heat is then converted into a voltage at the p-n junction of two graphene regions.

Significantly, this "hot-carrier" generation is quite different from the operation of standard semiconductor devices, which depend on an incoming photon overcoming the binding energy of an electron in the material in order to dislodge it and create an electrical current. In the ICFO device, the continued excitation of electrons above this band-gap level means they respond much faster and more easily to incoming photons, creating an electric current.

Though it is early days in the study of such devices, the practical upshot of this research may be in the eventual production of novel types of ultrafast and extremely effective photodetectors and energy-harvesting devices. And, given that the basic operating principles of hot-carrier graphene devices are substantially different from traditional silicon or germanium semiconductors, an entirely new stream of electronic components that take advantage of this phenomenon may evolve.

The findings of this work have recently been published in the journal Nature Nanotechnology.

Source: ICFO

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...