Wednesday, October 5, 2022

Solar activity and climate

From Wikipedia, the free encyclopedia
 
Solar irradiance (yellow) plotted with temperature (red) since 1880: the irradiance shows no long-term trend, only the 11-year solar cycle, while the temperature shows an upward trend.

Patterns of solar irradiance and solar variation have been a main driver of climate change over the millions to billions of years of the geologic time scale, but their role in recent warming is insignificant. Evidence for this comes from analyses on many timescales and from many sources, including direct observations, composites of different proxy observations, and numerical climate models. On millennial timescales, paleoclimate indicators have been compared with cosmogenic isotope abundances, which serve as a proxy for solar activity. These proxies are also used on century timescales, where instrumental data (mainly telescopic observations of sunspots and thermometer measurements of air temperature) are increasingly available as well. They show, for example, that temperature fluctuations do not match solar activity variations, and that the commonly invoked association of the Little Ice Age with the Maunder Minimum is far too simplistic: although solar variations may have played a minor role, a much bigger factor is known to be Little Ice Age volcanism. In recent decades, observations of unprecedented accuracy, sensitivity and scope (of both solar activity and terrestrial climate) have become available from spacecraft and show unequivocally that recent global warming is not caused by changes in the Sun.

Geologic time

Earth formed around 4.54 billion years ago by accretion from the solar nebula. Volcanic outgassing probably created the primordial atmosphere, which contained almost no oxygen and would have been toxic to humans and most modern life. Much of the Earth was molten because of frequent collisions with other bodies which led to extreme volcanism. Over time, the planet cooled and formed a solid crust, eventually allowing liquid water to exist on the surface.

Three to four billion years ago the Sun emitted only 70% of its current power. Under the present atmospheric composition, this past solar luminosity would have been insufficient to prevent water from uniformly freezing. There is nonetheless evidence that liquid water was already present in the Hadean and Archean eons, leading to what is known as the faint young Sun paradox. Hypothesized solutions to this paradox include a vastly different atmosphere, with much higher concentrations of greenhouse gases than currently exist.

Over the following approximately four billion years, the Sun's energy output increased and the composition of Earth's atmosphere changed. The Great Oxygenation Event around 2.4 billion years ago was the most notable alteration of the atmosphere. Over the next five billion years, the Sun's ultimate death as it becomes a very bright red giant and then a very faint white dwarf will have dramatic effects on climate, with the red giant phase likely ending any life remaining on Earth.

Measurement

Since 1978, solar irradiance has been directly measured by satellites with very good accuracy. These measurements indicate that the Sun's total solar irradiance fluctuates by ±0.1% over the ~11 years of the solar cycle, but that its average value has been stable since measurements began in 1978. Solar irradiance before the 1970s is estimated using proxy variables, such as tree rings, the number of sunspots, and the abundances of cosmogenic isotopes such as 10Be, all of which are calibrated to the post-1978 direct measurements.
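
To put the ±0.1% cycle amplitude in perspective, the globally averaged radiative forcing it implies can be estimated with the standard geometric factor of 1/4 (a sphere intercepts sunlight over a disc) and Earth's Bond albedo of roughly 0.3. The short calculation below is an illustrative sketch using round numbers, not a figure taken from the studies cited here.

    # Rough conversion of the ~0.1% solar-cycle TSI variation into a
    # globally averaged radiative forcing (illustrative round numbers).
    TSI = 1361.0                 # W/m^2, approximate mean total solar irradiance
    delta_tsi = 0.001 * TSI      # ~0.1% variation over a solar cycle
    albedo = 0.3                 # Earth's approximate Bond albedo

    # Intercepted sunlight (pi*R^2) is spread over the whole sphere (4*pi*R^2),
    # and the reflected fraction never enters the climate system.
    delta_forcing = delta_tsi * (1 - albedo) / 4
    print(f"{delta_forcing:.2f} W/m^2")  # ~0.24 W/m^2, small next to the several W/m^2 of greenhouse-gas forcing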

Modelled simulation of the effect of various factors (including GHGs, Solar irradiance) singly and in combination, showing in particular that solar activity produces a small and nearly uniform warming, unlike what is observed.

Solar activity has been on a declining trend since the 1960s, as indicated by solar cycles 19–24, in which the maximum numbers of sunspots were 201, 111, 165, 159, 121 and 82, respectively. In the three decades following 1978, the combination of solar and volcanic activity is estimated to have had a slight cooling influence. A 2010 study found that the composition of solar radiation might have changed slightly, with an increase in ultraviolet radiation and a decrease at other wavelengths.

Modern era

In the modern era, the Sun has operated within a sufficiently narrow band that climate has been little affected. Models indicate that the combination of solar variations and volcanic activity can explain periods of relative warmth and cold between A.D. 1000 and 1900.

The Holocene

Numerous paleoenvironmental reconstructions have looked for relationships between solar variability and climate. Arctic paleoclimate records, in particular, have linked total solar irradiance variations to climate variability. A 2001 paper identified a ~1500-year solar cycle that was a significant influence on North Atlantic climate throughout the Holocene.

Little Ice Age

One historical long-term correlation between solar activity and climate change is the 1645–1715 Maunder minimum, a period of little or no sunspot activity which partially overlapped the "Little Ice Age" during which cold weather prevailed in Europe. The Little Ice Age encompassed roughly the 16th to the 19th centuries. Whether the low solar activity or other factors caused the cooling is debated.

The Spörer Minimum between 1460 and 1550 was matched to a significant cooling period.

A 2012 paper instead linked the Little Ice Age to volcanism, through an "unusual 50-year-long episode with four large sulfur-rich explosive eruptions," and claimed "large changes in solar irradiance are not required" to explain the phenomenon.

A 2010 paper suggested that a new 90-year period of low solar activity would reduce global average temperatures by about 0.3 °C, which would be far from enough to offset the increased forcing from greenhouse gases.

Fossil fuel era

1979–2009: Over the past 3 decades, terrestrial temperature has not correlated with sunspot trends. The top plot is of sunspots, while below is the global atmospheric temperature trend. El Chichón and Pinatubo were volcanoes, while El Niño is part of ocean variability. The effect of greenhouse gas emissions is on top of those fluctuations.
 
Multiple factors have affected terrestrial climate change, including natural climate variability and human influences such as greenhouse gas emissions and land use change on top of any effects of solar variability.

The contribution of recent solar activity to climate has been quantified; it is not a major driver of the warming that has occurred since early in the twentieth century. Human-induced forcings are needed to reproduce the late-20th-century warming. Some studies associate solar cycle-driven irradiation increases with part of twentieth-century warming.

Three mechanisms are proposed by which solar activity affects climate:

  • Solar irradiance changes directly affecting the climate ("radiative forcing"). This is generally considered to be a minor effect, as the measured amplitudes of the variations are too small to have significant effect, absent some amplification process.
  • Variations in the ultraviolet component. The UV component varies by more than the total, so if UV were for some (as yet unknown) reason to have a disproportionate effect, this might explain a larger solar signal.
  • Effects mediated by changes in galactic cosmic rays (which are affected by the solar wind) such as changes in cloud cover.

Climate models have been unable to reproduce the rapid warming observed in recent decades when they only consider variations in total solar irradiance and volcanic activity. Hegerl et al. (2007) concluded that greenhouse gas forcing had "very likely" caused most of the observed global warming since the mid-20th century. In making this conclusion, they allowed for the possibility that climate models had been underestimating the effect of solar forcing.

Another line of evidence comes from looking at how temperatures at different levels in the Earth's atmosphere have changed. Models and observations show that greenhouse gases result in warming of the troposphere but cooling of the stratosphere. Depletion of the ozone layer by chemical refrigerants has also produced a stratospheric cooling effect. If the Sun were responsible for the observed warming, warming of both the troposphere at the surface and the top of the stratosphere would be expected, as increased solar activity would replenish ozone and oxides of nitrogen.

Lines of evidence

The assessment of the solar activity/climate relationship involves multiple, independent lines of evidence.

Sunspots

CO2, temperature, and sunspot activity since 1850

Early research attempted to find a correlation between weather and sunspot activity, mostly without notable success. Later research has concentrated more on correlating solar activity with global temperature.

Irradiation

Solar forcing 1850–2050 used in a NASA GISS climate model. Recent variation pattern used after 2000.

Accurate measurement of solar forcing is crucial to understanding possible solar impact on terrestrial climate. Accurate measurements only became available during the satellite era, starting in the late 1970s, and even that is open to some residual disputes: different teams find different values, due to different methods of cross-calibrating measurements taken by instruments with different spectral sensitivity. Scafetta and Willson argue for significant variations of solar luminosity between 1980 and 2000, but Lockwood and Fröhlich find that solar forcing declined after 1987.

The 2001 Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR) concluded that the measured impact of recent solar variation is much smaller than the amplification effect due to greenhouse gases, but acknowledged that scientific understanding is poor with respect to solar variation.

Estimates of long-term solar irradiance changes have decreased since the TAR. However, empirical results of detectable tropospheric changes have strengthened the evidence for solar forcing of climate change. The most likely mechanism is considered to be some combination of direct forcing by TSI changes and indirect effects of ultraviolet (UV) radiation on the stratosphere. Least certain are indirect effects induced by galactic cosmic rays.

In 2002, Lean et al. stated that while "There is ... growing empirical evidence for the Sun's role in climate change on multiple time scales including the 11-year cycle", "changes in terrestrial proxies of solar activity (such as the 14C and 10Be cosmogenic isotopes and the aa geomagnetic index) can occur in the absence of long-term (i.e., secular) solar irradiance changes ... because the stochastic response increases with the cycle amplitude, not because there is an actual secular irradiance change." They conclude that because of this, "long-term climate change may appear to track the amplitude of the solar activity cycles," but that "Solar radiative forcing of climate is reduced by a factor of 5 when the background component is omitted from historical reconstructions of total solar irradiance ...This suggests that general circulation model (GCM) simulations of twentieth century warming may overestimate the role of solar irradiance variability." A 2006 review suggested that solar brightness had relatively little effect on global climate, with little likelihood of significant shifts in solar output over long periods of time. Lockwood and Fröhlich, 2007, found "considerable evidence for solar influence on the Earth's pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century", but that "over the past 20 years, all the trends in the Sun that could have had an influence on the Earth's climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures." In a study that considered geomagnetic activity as a measure of known solar-terrestrial interaction, Love et al. found a statistically significant correlation between sunspots and geomagnetic activity, but not between global surface temperature and either sunspot number or geomagnetic activity.

Benestad and Schmidt concluded that "the most likely contribution from solar forcing a global warming is 7 ± 1% for the 20th century and is negligible for warming since 1980." This paper disagreed with Scafetta and West, who claimed that solar variability has a significant effect on climate forcing. Based on correlations between specific climate and solar forcing reconstructions, they argued that a "realistic climate scenario is the one described by a large preindustrial secular variability (e.g., the paleoclimate temperature reconstruction by Moberg et al.) with TSI experiencing low secular variability (as the one shown by Wang et al.)." Under this scenario, they claimed the Sun might have contributed 50% of the observed global warming since 1900. Stott et al. estimated that the residual effects of the prolonged high solar activity during the last 30 years account for between 16% and 36% of warming from 1950 to 1999.

Direct measurement and time series

Update of the 2007 solar change and climate analysis by Lockwood and Fröhlich, extended to the present day. Panels, from top to bottom, show: global mean air surface temperature anomaly from the HadCRUT4 dataset; the mixing ratio of carbon dioxide in Earth's atmosphere from direct observations (blue dots) and ice cores (mauve line); the international sunspot number smoothed using averaging intervals between 8 and 14 years (the black line connects the points where the mean is independent of the averaging interval and so shows the solar activity trend without assuming a solar cycle length); the total solar irradiance, where the blue dots are the PMOD composite of observations and the black and mauve lines are annual and 11-year means of the SATIRE-T2 model of the effect of sunspots and faculae, with the addition of a quiet-Sun variation derived from cosmic ray fluxes and cosmogenic isotopes; the open solar flux from geomagnetic observations (mauve line) and spacecraft data (blue dots); Oulu neutron monitor cosmic ray counts, observed (blue dots) and extrapolated using cosmogenic isotope data (mauve line); and monthly (grey) and annual (mauve) international sunspot numbers. The green and yellow shaded bands mark sunspot cycles 14 and 24.

Neither direct measurements nor proxies of solar variation correlate well with Earth global temperature, particularly in recent decades when both quantities are best known. 

The oppositely-directed trends highlighted by Lockwood and Fröhlich in 2007, with global mean temperatures continuing to rise while solar activity fell, have continued and become even more pronounced since then. In 2007 the difference in the trends was apparent after about 1987 and that difference has grown and accelerated in subsequent years. The updated figure (above) shows the variations and contrasts solar cycles 14 and 24, a century apart, which are quite similar in all solar activity measures (in fact cycle 24 is slightly less active than cycle 14 on average), yet the global mean air surface temperature is more than 1 degree Celsius higher for cycle 24 than cycle 14, showing the rise is not associated with solar activity. The total solar irradiance (TSI) panel shows the PMOD composite of observations with a modelled variation from the SATIRE-T2 model of the effect of sunspots and faculae with the addition of a quiet-Sun variation (due to sub-resolution photospheric features and any solar radius changes) derived from correlations with cosmic ray fluxes and cosmogenic isotopes. The finding that solar activity was approximately the same in cycles 14 and 24 applies to all solar outputs that have, in the past, been proposed as a potential cause of terrestrial climate change and includes total solar irradiance, cosmic ray fluxes, spectral UV irradiance, solar wind speed and/or density, heliospheric magnetic field and its distribution of orientations, and the consequent level of geomagnetic activity.

Daytime/nighttime

Global average diurnal temperature range has decreased. Daytime temperatures have not risen as fast as nighttime temperatures. This is the opposite of the expected warming if solar energy (falling primarily or wholly during daylight, depending on energy regime) were the principal means of forcing. It is, however, the expected pattern if greenhouse gases were preventing radiative escape, which is more prevalent at night.

Hemisphere and latitude

The Northern Hemisphere is warming faster than the Southern Hemisphere. This is the opposite of the expected pattern if the Sun, currently closer to the Earth during austral summer, were the principal climate forcing. In particular, the Southern Hemisphere, with more ocean area and less land area, has a lower albedo ("whiteness") and absorbs more light. The Northern Hemisphere, however, has higher population, industry and emissions.

Furthermore, the Arctic region is warming faster than the Antarctic and faster than northern mid-latitudes and subtropics, despite polar regions receiving less sun than lower latitudes.

Altitude

Solar forcing should warm Earth's atmosphere roughly evenly by altitude, with some variation by wavelength/energy regime. However, the atmosphere is warming at lower altitudes while cooling higher up. This is the expected pattern if greenhouse gases drive temperature, as on Venus.

Solar variation theory

A 1994 study of the US National Research Council concluded that TSI variations were the most likely cause of significant climate change in the pre-industrial era, before significant human-generated carbon dioxide entered the atmosphere.

Scafetta and West correlated solar proxy data and lower tropospheric temperature for the preindustrial era, before significant anthropogenic greenhouse forcing, suggesting that TSI variations may have contributed 50% of the warming observed between 1900 and 2000 (although they conclude "our estimates about the solar effect on climate might be overestimated and should be considered as an upper limit.") If interpreted as a detection rather than an upper limit, this would contrast with global climate models predicting that solar forcing of climate through direct radiative forcing makes an insignificant contribution.

Sunspot and temperature reconstructions from proxy data

In 2000, Stott and others reported on the most comprehensive model simulations of 20th century climate to that date. Their study looked at both "natural forcing agents" (solar variations and volcanic emissions) as well as "anthropogenic forcing" (greenhouse gases and sulphate aerosols). They found that "solar effects may have contributed significantly to the warming in the first half of the century although this result is dependent on the reconstruction of total solar irradiance that is used. In the latter half of the century, we find that anthropogenic increases in greenhouse gases are largely responsible for the observed warming, balanced by some cooling due to anthropogenic sulphate aerosols, with no evidence for significant solar effects." Stott's group found that combining these factors enabled them to closely simulate global temperature changes throughout the 20th century. They predicted that continued greenhouse gas emissions would cause additional future temperature increases "at a rate similar to that observed in recent decades". In addition, the study notes "uncertainties in historical forcing" — in other words, past natural forcing may still be having a delayed warming effect, most likely due to the oceans.

Stott's 2003 work largely revised this assessment, finding a significant solar contribution to recent warming, although one still smaller (between 16 and 36%) than that of greenhouse gases.

A 2004 study, based on sunspot activity, concluded that solar activity affects the climate but plays only a small role in the current global warming.

Correlations to solar cycle length

In 1991, Friis-Christensen and Lassen claimed a strong correlation of the length of the solar cycle with northern hemispheric temperature changes. They initially used sunspot and temperature measurements from 1861 to 1989 and later extended the period using four centuries of climate records. Their reported relationship appeared to account for nearly 80 per cent of measured temperature changes over this period. The mechanism behind these claimed correlations was a matter of speculation.

In a 2003 paper Laut identified problems with some of these correlation analyses. Damon and Laut claimed:

the apparent strong correlations displayed on these graphs have been obtained by incorrect handling of the physical data. The graphs are still widely referred to in the literature, and their misleading character has not yet been generally recognized.

Damon and Laut stated that when the graphs are corrected for filtering errors, the sensational agreement with the recent global warming, which drew worldwide attention, totally disappeared.

In 2000, Lassen and Thejll updated their 1991 research and concluded that while the solar cycle accounted for about half the temperature rise since 1900, it failed to explain a rise of 0.4 °C since 1980. Benestad's 2005 review found that the solar cycle did not follow Earth's global mean surface temperature.

Weather

Solar activity may also impact regional climates, such as for the rivers Paraná and Po. Measurements from NASA's Solar Radiation and Climate Experiment show that solar UV output is more variable than total solar irradiance. Climate modelling suggests that low solar activity may result in, for example, colder winters in the US and northern Europe and milder winters in Canada and southern Europe, with little change in global averages. More broadly, links have been suggested between solar cycles, global climate and regional events such as El Niño. Hancock and Yarger found "statistically significant relationships between the double [~21-year] sunspot cycle and the 'January thaw' phenomenon along the East Coast and between the double sunspot cycle and 'drought' (June temperature and precipitation) in the Midwest."

Cloud condensation

Recent research at CERN's CLOUD facility examined links between cosmic rays and cloud condensation nuclei, demonstrating the effect of high-energy particulate radiation in nucleating aerosol particles that are precursors to cloud condensation nuclei. Kirkby (CLOUD team leader) said, "At the moment, it [the experiment] actually says nothing about a possible cosmic-ray effect on clouds and climate." After further investigation, the team concluded that "variations in cosmic ray intensity do not appreciably affect climate through nucleation."

1983–1994 global low cloud formation data from the International Satellite Cloud Climatology Project (ISCCP) was highly correlated with galactic cosmic ray (GCR) flux; subsequent to this period, the correlation broke down. Changes of 3–4% in cloudiness and concurrent changes in cloud top temperatures correlated to the 11 and 22-year solar (sunspot) cycles, with increased GCR levels during "antiparallel" cycles. Global average cloud cover change was measured at 1.5–2%. Several GCR and cloud cover studies found positive correlation at latitudes greater than 50° and negative correlation at lower latitudes. However, not all scientists accept this correlation as statistically significant, and some who do attribute it to other solar variability (e.g. UV or total irradiance variations) rather than directly to GCR changes. Difficulties in interpreting such correlations include the fact that many aspects of solar variability change at similar times, and some climate systems have delayed responses.

Historical perspective

Physicist and historian Spencer R. Weart in The Discovery of Global Warming (2003) wrote:

The study of [sun spot] cycles was generally popular through the first half of the century. Governments had collected a lot of weather data to play with and inevitably people found correlations between sun spot cycles and select weather patterns. If rainfall in England didn't fit the cycle, maybe storminess in New England would. Respected scientists and enthusiastic amateurs insisted they had found patterns reliable enough to make predictions. Sooner or later though every prediction failed. An example was a highly credible forecast of a dry spell in Africa during the sunspot minimum of the early 1930s. When the period turned out to be wet, a meteorologist later recalled "the subject of sunspots and weather relationships fell into disrepute, especially among British meteorologists who witnessed the discomfiture of some of their most respected superiors." Even in the 1960s he said, "For a young [climate] researcher to entertain any statement of sun-weather relationships was to brand oneself a crank."

 

Covalent organic framework

From Wikipedia, the free encyclopedia

Covalent organic frameworks (COFs) are a class of materials that form two- or three-dimensional structures through reactions between organic precursors, resulting in strong covalent bonds that afford porous, stable, and crystalline materials. COFs emerged as a field from the overarching domain of organic materials as researchers optimized both synthetic control and precursor selection. These improvements to coordination chemistry allowed the field to advance from non-porous, amorphous organic materials, such as organic polymers, to porous, crystalline materials with rigid structures that grant exceptional stability in a wide range of solvents and conditions. Through the development of reticular chemistry, precise synthetic control was achieved, resulting in ordered, nano-porous structures with highly preferential structural orientation and properties that can be synergistically enhanced and amplified. With judicious selection of COF secondary building units (SBUs), or precursors, the final structure can be predetermined and modified with exceptional control, enabling fine-tuning of emergent properties. This level of control allows COF materials to be designed, synthesized, and utilized in various applications, often with performance metrics on par with or surpassing those of current state-of-the-art approaches.

History

While at the University of Michigan, Omar M. Yaghi (currently at UC Berkeley) and Adrien P. Côté published the first paper on COFs in 2005, reporting a series of 2D COFs. They reported the design and successful synthesis of COFs by condensation reactions of phenyl diboronic acid (C6H4[B(OH)2]2) and hexahydroxytriphenylene (C18H6(OH)6). Powder X-ray diffraction studies of the highly crystalline products having empirical formulas (C3H2BO)6·(C9H12)1 (COF-1) and C9H4BO2 (COF-5) revealed 2-dimensional expanded porous graphitic layers that have either staggered conformation (COF-1) or eclipsed conformation (COF-5). Their crystal structures are held together entirely by strong bonds between B, C, and O atoms, forming rigid porous architectures with pore sizes ranging from 7 to 27 angstroms. COF-1 and COF-5 exhibit high thermal stability (to temperatures up to 500 to 600 °C), permanent porosity, and high surface areas (711 and 1590 square meters per gram, respectively).

The synthesis of 3D COFs was hindered by longstanding practical and conceptual challenges until it was first achieved in 2007 by Omar M. Yaghi and colleagues. Unlike 0D and 1D systems, which are soluble, the insolubility of 2D and 3D structures precludes the use of stepwise synthesis, making their isolation in crystalline form very difficult. This first challenge, however, was overcome by judiciously choosing building blocks and using reversible condensation reactions to crystallize COFs.

Structure

Porous crystalline solids consist of secondary building units (SBUs) which assemble to form a periodic and porous framework. An almost infinite number of frameworks can be formed through various SBU combinations, leading to unique material properties for applications in separations, storage, and heterogeneous catalysis.

Types of porous crystalline solids include zeolites, metal-organic frameworks (MOFs), and covalent organic frameworks (COFs). Zeolites are microporous, aluminosilicate minerals commonly used as commercial adsorbents. MOFs are a class of porous polymeric materials, consisting of metal ions linked together by organic bridging ligands, and are a new development at the interface between molecular coordination chemistry and materials science.

COFs are another class of porous polymeric materials, consisting of porous, crystalline networks held together entirely by covalent bonds. They usually have rigid structures, exceptional thermal stabilities (to temperatures up to 600 °C), stability in water, and low densities. They exhibit permanent porosity with specific surface areas surpassing those of well-known zeolites and porous silicates.

Secondary Building Units

Schematic Figure of Reticular Chemistry.

The term ‘secondary building unit’ has been used for some time to describe conceptual fragments which can be compared to the bricks used to build a house, as in zeolites; in the context of this article it refers to the geometry of the units defined by the points of extension.

Reticular Synthesis

Reticular synthesis enables facile bottom-up synthesis of the framework materials to introduce precise perturbations in chemical composition, resulting in the highly controlled tunability of framework properties. Through a bottom-up approach, a material is built from atomic or molecular components synthetically as opposed to a top-down approach, which forms a material from the bulk through approaches such as exfoliation, lithography, or other varieties of post-synthetic modification. The bottom-up approach is especially advantageous with respect to materials such as COFs because the synthetic methods are designed to directly result in an extended, highly crosslinked framework that can be tuned with exceptional control at the nanoscale level. Geometrical and dimensional principles govern the framework's resulting topology as the SBUs combine to form predetermined structures. This level of synthetic control has also been termed "molecular engineering", following the concept coined by Arthur R. von Hippel in 1956.

COF topological control through judicious selection of precursors that result in bonding directionality in the final resulting network. Adapted from Jiang and coworkers' Two- and Three-dimensional Covalent Organic Frameworks (COFs).

It has been established in the literature that, when integrated into an isoreticular framework such as a COF, properties of monomeric compounds can be synergistically enhanced and amplified. COF materials possess the unique ability for bottom-up reticular synthesis to afford robust, tunable frameworks that synergistically enhance the properties of the precursors, which, in turn, offers many advantages in terms of improved performance in different applications. As a result, the COF material is highly modular and can be tuned efficiently by varying the identity, length, and functionality of the SBUs depending on the desired property change at the framework scale. There is thus the ability to introduce diverse functionality directly into the framework scaffold to allow for a variety of functions that would be cumbersome, if not impossible, to achieve through a top-down method such as lithographic approaches or chemical-based nanofabrication. Through reticular synthesis, it is possible to molecularly engineer modular framework materials with highly porous scaffolds that exhibit unique electronic, optical, and magnetic properties while simultaneously integrating desired functionality into the COF skeleton.

Reticular synthesis is different from retrosynthesis of organic compounds, because the structural integrity and rigidity of the building blocks in reticular synthesis remain unaltered throughout the construction process—an important aspect that could help to fully realize the benefits of design in crystalline solid-state frameworks. Similarly, reticular synthesis should be distinguished from supramolecular assembly, because in the former, building blocks are linked by strong bonds throughout the crystal.

Synthetic Chemistry

Reversible reactions for COF formation featuring boron to form a variety of linkages (boronate, boroxine, and borazine).

Reticular synthesis was used by Yaghi and coworkers in 2005 to construct the first two COFs reported in the literature: COF-1, using a dehydration reaction of benzenediboronic acid (BDBA), and COF-5, via a condensation reaction between hexahydroxytriphenylene (HHTP) and BDBA. These framework scaffolds were interconnected through the formation of boroxine and boronate linkages, respectively, using solvothermal synthetic methods.

COF Linkages

Since Yaghi and coworkers’ seminal work in 2005, COF synthesis has expanded to include a wide range of organic connectivity, such as boron-, nitrogen-, and other atom-containing linkages. The linkages in the figures shown are not comprehensive, as other COF linkages exist in the literature, especially for the formation of 3D COFs.

Skeletal structure of COF-1 consisting of phenyl rings joined by boroxine rings, synthesized by a condensation reaction of phenyldiboronic acid.

Boron condensation

The most popular COF synthesis route is a boron condensation reaction, which is a molecular dehydration reaction between boronic acids. In the case of COF-1, three boronic acid molecules converge to form a planar six-membered B3O3 (boroxine) ring with the elimination of three water molecules.
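
Written as a generic balanced equation (with R standing for the aryl backbone of the boronic acid; this is the textbook boroxine self-condensation rather than a scheme reproduced from the original report), the reaction is:

    3 R-B(OH)2  ->  (R-BO)3  +  3 H2O

Each boroxine ring thus releases three water molecules, which is why removing water during solvothermal synthesis drives the equilibrium toward the extended framework.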

Reversible reactions for COF formation featuring nitrogen to form a variety of linkages (imine, hydrazone, azine, squaraine, phenazine, imide, triazine).

Triazine based trimerization

Formation of CTF-1 COF featuring triazine linkages.

Another class of high performance polymer frameworks with regular porosity and high surface area is based on triazine materials, which can be obtained by the dynamic trimerization of simple, cheap, and abundant aromatic nitriles under ionothermal conditions (molten zinc chloride at high temperature, 400 °C). CTF-1 is a good example of this chemistry.

Imine condensation

A structural representation of the TpOMe-DAQ COF
 
Reversible reactions for COF formation featuring a variety of atoms to form different linkages (a double stage connecting boronate ester and imine linkages, alkene, silicate, nitroso).

The imine condensation reaction, which eliminates water (exemplified by reacting aniline with benzaldehyde using an acid catalyst), can be used as a synthetic route to reach a new class of COFs. The 3D COF called COF-300 and the 2D COF named TpOMe-DAQ are good examples of this chemistry. When 1,3,5-triformylphloroglucinol (TFP) is used as one of the SBUs, two complementary tautomerizations occur (an enol to keto and an imine to enamine) which result in a β-ketoenamine moiety as depicted in the DAAQ-TFP framework. Both DAAQ-TFP and TpOMe-DAQ COFs are stable in acidic aqueous conditions and contain the redox active linker 2,6-diaminoanthraquinone, which enables these materials to reversibly store and release electrons within a characteristic potential window. Consequently, both of these COFs have been investigated as electrode materials for potential use in supercapacitors.

A structural representation of the DAAQ-TFP COF

Solvothermal Synthesis

The solvothermal approach is the most commonly used in the literature, but it typically requires long reaction times due to the insolubility of the organic SBUs in nonorganic media and the time necessary to reach thermodynamic COF products.

Templated Synthesis

Morphological control at the nanoscale is still limited, as COFs lack synthetic control in higher dimensions due to the lack of dynamic chemistry during synthesis. To date, researchers have attempted to establish better control through different synthetic methods such as solvothermal synthesis, interface-assisted synthesis, solid templation, and seeded growth. In templated synthesis, one of the precursors is first deposited onto a solid support, followed by the introduction of the second precursor in vapor form. This results in the deposition of the COF as a thin film on the solid support.

Properties

Porosity

A defining advantage of COFs is the exceptional porosity that results from the substitution of analogous SBUs of varying sizes. Pore sizes range from 7 to 23 Å and feature a diverse range of shapes and dimensionalities that remain stable during the evacuation of solvent. The rigid scaffold of the COF structure enables the material to be evacuated of solvent and retain its structure, resulting in high surface areas as determined by Brunauer–Emmett–Teller analysis. This high surface-area-to-volume ratio and remarkable stability enable COFs to serve as exceptional materials for gas storage and separation.

Crystallinity

Several COF single crystals have been synthesized to date, and a variety of techniques is employed to improve the crystallinity of COFs. The use of modulators, monofunctional versions of the precursors, serves to slow COF formation and allow a more favorable balance between kinetic and thermodynamic control, thereby enabling crystalline growth. This was employed by Yaghi and coworkers for 3D imine-based COFs (COF-300, COF-303, LZU-79, and LZU-111). However, the vast majority of COFs do not crystallize into single crystals but are instead isolated as insoluble powders. The crystallinity of these polycrystalline materials can be improved by tuning the reversibility of the linkage formation to allow for corrective particle growth and self-healing of defects that arise during COF formation.

Conductivity

In a fully conjugated 2D COF material such as those synthesized from metallophthalocyanines and highly conjugated organic linkers, charge transport is increased both in-plane, as well as through the stacks, resulting in increased conductivity.

Integration of SBUs into a covalent framework results in the synergistic emergence of conductivities much greater than the monomeric values. The nature of the SBUs can improve conductivity. Through the use of highly conjugated linkers throughout the COF scaffold, the material can be engineered to be fully conjugated, enabling high charge carrier density as well as through- and in-plane charge transport.  For instance, Mirica and coworkers synthesized a COF material (NiPc-Pyr COF) from nickel phthalocyanine (NiPc) and pyrene organic linkers that had a conductivity of 2.51 x 10−3 S/m, which was several orders of magnitude larger than the undoped molecular NiPc, 10−11 S/m. A similar COF structure made by Jiang and coworkers, CoPc-Pyr COF, exhibited a conductivity of 3.69 x 10−3 S/m. In both previously mentioned COFs, the 2D lattice allows for full π-conjugation in the x and y directions as well as π-conduction along the z axis due to the fully conjugated, aromatic scaffold and π-π stacking, respectively. Emergent electrical conductivity in COF structures is especially important for applications such as catalysis and energy storage where quick and efficient charge transport is required for optimal performance.

Characterization

There exists a wide range of characterization methods for COF materials. For the handful of COF single crystals synthesized to date, single-crystal X-ray diffraction (XRD) is a powerful tool capable of determining COF crystal structure. The majority of COF materials, however, suffer from decreased crystallinity, so powder X-ray diffraction (PXRD) is used. In conjunction with simulated powder packing models, PXRD can determine COF crystal structure.

In order to verify and analyze COF linkage formation, various techniques can be employed, such as infrared (IR) spectroscopy and nuclear magnetic resonance (NMR) spectroscopy. Comparison of precursor and COF IR spectra makes it possible to ascertain that key bonds present in the COF linkages appear and that peaks of precursor functional groups disappear. In addition, solid-state NMR enables probing of linkage formation as well and is well suited for large, insoluble materials like COFs. Gas adsorption-desorption studies quantify the porosity of the material via calculation of the Brunauer–Emmett–Teller (BET) surface area and pore diameter from gas adsorption isotherms. Electron imaging techniques such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can resolve surface structure and morphology, and microstructural information, respectively. Scanning tunneling microscopy (STM) and atomic force microscopy (AFM) have also been used to characterize COF microstructural information. Additionally, methods like X-ray photoelectron spectroscopy (XPS), inductively coupled plasma mass spectrometry (ICP-MS), and combustion analysis can be used to identify elemental composition and ratios.
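
As a sketch of how the BET surface area mentioned above is extracted from an adsorption isotherm (the standard textbook treatment, not a procedure specific to any COF paper cited here), the linearized BET equation is fitted over the low relative-pressure region of the isotherm:

    \frac{p}{v\,(p_0 - p)} = \frac{1}{v_m c} + \frac{c - 1}{v_m c}\,\frac{p}{p_0}

where v is the quantity of gas adsorbed at pressure p, p_0 is the saturation pressure, v_m is the monolayer capacity, and c is the BET constant. The slope and intercept of the fit give v_m, and the specific surface area then follows as S_BET = v_m N_A σ / (V_mol m), with N_A Avogadro's number, σ the cross-sectional area of the adsorbate molecule (about 0.162 nm² for N2), V_mol the molar volume of the gas at STP, and m the sample mass.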

Applications

Gas Storage and Separation

Due to the exceptional porosity of COFs, they have been used extensively in the storage and separation of gases such as hydrogen, methane, etc.

Hydrogen Storage

Omar M. Yaghi and William A. Goddard III reported COFs as exceptional hydrogen storage materials. Using grand canonical Monte Carlo (GCMC) simulations as a function of temperature and pressure, they predicted that the highest excess H2 uptakes at 77 K are 10.0 wt % at 80 bar for COF-105 and 10.0 wt % at 100 bar for COF-108, which have higher surface area and free volume. This is the highest value reported for associative H2 storage of any material, making 3D COFs among the most promising new candidates in the quest for practical H2 storage materials. In 2012, the lab of William A. Goddard III reported the uptake for COF102, COF103, and COF202 at 298 K, and they also proposed new strategies to obtain stronger interaction with H2. One such strategy consists of metalating the COF with alkali metals such as Li. Complexes of Li, Na and K with benzene ligands (such as 1,3,5-benzenetribenzoate, the ligand used in MOF-177) have been synthesized by Krieck et al., and Goddard showed that THF is important for their stability. If metalation with alkali metals is performed on the COFs, Goddard et al. calculated that some COFs can reach the 2010 DOE gravimetric target in delivery units at 298 K of 4.5 wt %: COF102-Li (5.16 wt %), COF103-Li (4.75 wt %), COF102-Na (4.75 wt %) and COF103-Na (4.72 wt %). COFs also perform better in delivery units than MOFs: the best volumetric performances are those of COF102-Na (24.9), COF102-Li (23.8), COF103-Na (22.8), and COF103-Li (21.7), all in delivery g H2/L units for 1–100 bar. These are the highest gravimetric molecular hydrogen uptakes for a porous material under these thermodynamic conditions.

Methane Storage

Omar M. Yaghi and William A. Goddard III also reported COFs as exceptional methane storage materials. The best COF in terms of total volume of CH4 stored per unit volume of COF absorbent is COF-1, which can store 195 v/v at 298 K and 30 bar, exceeding the U.S. Department of Energy target for CH4 storage of 180 v/v at 298 K and 35 bar. The best COFs on a delivery amount basis (volume adsorbed from 5 to 100 bar) are COF-102 and COF-103, with values of 230 and 234 v(STP: 298 K, 1.01 bar)/v, respectively, making these promising materials for practical methane storage. More recently, new COFs with better delivery amounts have been designed in the lab of William A. Goddard III, and they have been shown to be stable and to overcome the DOE target on a delivery basis. COF-103-Eth-trans and COF-102-Ant are found to exceed the DOE target of 180 v(STP)/v at 35 bar for methane storage. They reported that using thin vinyl bridging groups aids performance by minimizing the methane-COF interaction at low pressure.

Gas Separation

In addition to storage, COF materials are exceptional at gas separation. For instance, COFs like imine-linked COF LZU1 and azine-linked COF ACOF-1 were used as a bilayer membrane for the selective separation of the following mixtures: H2/CO2, H2/N2, and H2/CH4. The COFs outperformed molecular sieves due to the inherent thermal and operational stability of the structures. It has also been shown that COFs inherently act as adsorbents, adhering to the gaseous molecules to enable storage and separation.

Optical properties

A highly ordered π-conjugation TP-COF, consisting of pyrene and triphenylene functionalities alternately linked in a mesoporous hexagonal skeleton, is highly luminescent, harvests a wide wavelength range of photons, and allows energy transfer and migration. Furthermore, TP-COF is electrically conductive and capable of repetitive on–off current switching at room temperature.

Porosity/surface-area effects

Most studies to date have focused on the development of synthetic methodologies with the aim of maximizing pore size and surface area for gas storage. This means the functions of COFs have not yet been widely explored, but COFs can also be used as catalysts, for gas separation, and in other applications.

Carbon capture

In 2015 the use of highly porous, catalyst-decorated COFs for converting carbon dioxide into carbon monoxide was reported. MOFs under solvent-free conditions can also be used to catalyze the cycloaddition of CO2 and epoxides into cyclic organic carbonates with enhanced catalyst recyclability.

Sensing

Because analyte molecules interact with the framework in well-defined ways, COFs can be used as chemical sensors in a wide range of environments and applications. The properties of a COF change when its functionalities interact with various analytes, enabling the materials to serve as sensing devices under various conditions, for example as chemiresistive sensors or as electrochemical sensors for small molecules.

Catalysis

Due to the ability to introduce diverse functionality into COFs’ structure, catalytic sites can be fine-tuned in conjunction with other advantageous properties like conductivity and stability to afford efficient and selective catalysts. COFs have been used as heterogeneous catalysts in organic, electrochemical, as well as photochemical reactions.

Electrocatalysis

COFs have been studied as non-metallic electrocatalysts for energy-related catalysis, including carbon dioxide electro-reduction and the water splitting reaction. However, such research is still at a very early stage. Most efforts have focused on solving key issues such as conductivity and stability in electrochemical processes.

Energy Storage

A few COFs possess the stability and conductivity necessary to perform well in energy storage applications such as lithium-ion batteries, various other metal-ion batteries, and cathodes.

Water filtration

A prototype 2 nanometer thick COF layer on a graphene substrate was used to filter dye from industrial wastewater. Once full, the COF can be cleaned and reused.

Computer file

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Computer_file

A computer file is a computer resource for recording data in a computer storage device, primarily identified by its file name. Just as words can be written to paper, so can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet.

Different types of computer files are designed for different purposes. A file may be designed to store an image, a written message, a video, a computer program, or any of a wide variety of other kinds of data. Certain files can store multiple data types at once.

By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times.

Files are typically organized in a file system, which tracks file locations on the disk and enables user access.

Etymology

 
The twin disk files of an IBM 305 system

The word "file" derives from the Latin filum ("a thread").

"File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards."

In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics."

In 1952, "file" denoted, among other things, information stored on punched cards.

In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased.

File contents

On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data.

On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file (.txt in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself.
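
The fact that interpretation, not the bytes themselves, defines a format can be made concrete with a short sketch: many formats begin with a recognizable "magic number", so a program can inspect the leading bytes instead of trusting the extension. The helper below is illustrative (the function and file names are invented); the PNG and JPEG signatures shown are the well-known ones.

    # Guess a file's type from its leading bytes rather than its extension.
    def sniff_type(path):
        with open(path, "rb") as f:
            head = f.read(8)                      # read the first few bytes
        if head.startswith(b"\x89PNG\r\n\x1a\n"):
            return "PNG image"
        if head.startswith(b"\xff\xd8\xff"):
            return "JPEG image"
        return "unknown (fall back to the extension or treat as plain bytes)"

    print(sniff_type("example.png"))              # hypothetical file name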

Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are more susceptible to loss of metadata than container and archive file formats.

File size

At any instant in time, a file has a size, normally expressed as a number of bytes, that indicates how much storage is occupied by the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files).

The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents (the results of aborted disk operations). For example, the file to which the link /bin/ls points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with /dev/null which is also a file, but as a character special file, its size is not meaningful.
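
A minimal sketch of these size distinctions, assuming a Unix-like system with the usual /bin/ls and /dev/null paths:

    import os

    open("empty.tmp", "w").close()           # newly created file with no data yet
    print(os.path.getsize("empty.tmp"))      # -> 0 bytes

    print(os.path.getsize("/bin/ls"))        # a regular file with a well-defined size
    print(os.path.getsize("/dev/null"))      # character special file: reported size is 0 and not meaningful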

Organization of data in a file

Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable.
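
As a small illustration of records within a file (the file name and field layout are hypothetical), each line of a payroll-style text file can hold one employee record whose fields share a common structure:

    # Read one record per line; fields are comma-separated in this sketch.
    with open("payroll.txt") as f:
        for line in f:
            name, employee_id, monthly_pay = line.rstrip("\n").split(",")
            print(name, employee_id, monthly_pay)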

The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names.

In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file.

Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of the archive file are to lower the number of files for easier transfer, to reduce storage usage, or just to organize outdated files. The archive file must often be unpacked before it can be used again.

Operations

The most basic operations that programs can perform on a file are listed here, with a minimal sketch in Python after the list:

  • Create a new file
  • Change the access permissions and attributes of a file
  • Open a file, which makes the file contents available to the program
  • Read data from a file
  • Write data to a file
  • Delete a file
  • Close a file, terminating the association between it and the program
  • Truncate a file, shortening it to a specified size within the file system without rewriting any content
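
A minimal sketch of these operations using only the Python standard library (the file name is illustrative, and the permission call assumes a POSIX-style system):

    import os

    with open("notes.txt", "w") as f:          # create and open for writing
        f.write("first line\nsecond line\n")   # write data

    with open("notes.txt", "r+") as f:         # reopen for reading and writing
        print(f.read())                        # read data back
        f.truncate(11)                         # shorten the file to its first 11 bytes

    os.chmod("notes.txt", 0o600)               # change access permissions
    os.remove("notes.txt")                     # delete the file (each `with` block already closed it)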

Files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or via the command-line interface (CLI).

In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user space is simply through its filename (instead of its inode). For example, rm filename will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's storage space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data.
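
The point that a name is only a link to the underlying file can be sketched briefly (POSIX assumptions; file names are illustrative):

    import os

    with open("data.txt", "w") as f:
        f.write("payload\n")

    os.link("data.txt", "alias.txt")        # create a second hard link to the same file
    print(os.stat("data.txt").st_nlink)     # -> 2 links

    os.remove("data.txt")                   # like `rm data.txt`: removes one link, not the data
    print(open("alias.txt").read())         # still readable through the remaining link

    os.remove("alias.txt")                  # last link removed: the kernel may reclaim the space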

File moves within a file system complete almost immediately because the data content does not need to be rewritten. Only the paths need to be changed.

Moving methods

There are two distinct implementations of file moves.

When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after being transferred, while other software deletes all files at once only after every file has been transferred.

With the mv command for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: mv -n sourcePath/* targetPath), while the latter method is used when selecting entire directories (example: mv -n sourcePath targetPath). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method when using Media Transfer Protocol, as described in Media Transfer Protocol § File move behaviour.

The former method (individual deletion from source) has the benefit that space is released from the source device or partition immediately after the transfer has begun, that is, as soon as the first file has finished transferring. With the latter method, space is only freed after the transfer of the entire selection has finished.

If an incomplete file transfer with the latter method is aborted unexpectedly, perhaps due to a power failure, system halt, or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file.

With the individual-deletion method, the file-moving software also does not need to keep a cumulative record of which files have finished transferring in case the user manually aborts the transfer. A file manager using the latter (afterwards-deletion) method must keep such a record, so that on an abort it deletes from the source directory only the files that have already finished transferring.
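
The following is a rough sketch of the two strategies for a cross-device move (copy, then delete); the function names are illustrative and not taken from any real file manager.

    import os
    import shutil

    def move_delete_individually(files, target_dir):
        # Former method: delete each source file as soon as its copy completes,
        # so space on the source is released progressively.
        for path in files:
            shutil.copy2(path, target_dir)
            os.remove(path)

    def move_delete_afterwards(files, target_dir):
        # Latter method: copy everything first, then delete; the program must
        # remember which copies finished in case the transfer is aborted.
        copied = []
        for path in files:
            shutil.copy2(path, target_dir)
            copied.append(path)
        for path in copied:
            os.remove(path)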

Identifying and organizing

Files and folders arranged in a hierarchy

In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself. In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can treat the name of a link as if it were the name of the file itself, but this is misleading, particularly where more than one link points to the same file.

Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must typically be unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type, so that a directory can contain the same name for more than one type of object, such as a directory and a file.

In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names.

Any string of characters may be a well-formed name for a file or a link, depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of Unicode letters or digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case-sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity. The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case.

Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one "master folder" (or "root folder" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have a name). The use of folders makes it easier to organize files in a logical way.

When a computer allows the use of folders, each file and folder has not only a name of its own, but also a path, which identifies the folder or folders in which a file or folder resides. In the path, some sort of special character—such as a slash—is used to separate the file and folder names. For example, in the illustration shown in this article, the path /Payroll/Salaries/Managers uniquely identifies a file called Managers in a folder called Salaries, which in turn is contained in a folder called Payroll. The folder and file names are separated by slashes in this example; the topmost or root folder has no name, and so the path begins with a slash (if the root folder had a name, it would precede this first slash).
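
The same /Payroll/Salaries/Managers example can be sketched in Python using the standard pathlib module, which handles the separator character for the caller:

    from pathlib import PurePosixPath

    path = PurePosixPath("/", "Payroll", "Salaries", "Managers")
    print(path)         # /Payroll/Salaries/Managers
    print(path.parent)  # /Payroll/Salaries -- the folder containing the file
    print(path.name)    # Managers -- the file's own name
    print(path.parts)   # ('/', 'Payroll', 'Salaries', 'Managers')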

Many computer systems use extensions in file names to help identify what they contain, also known as the file type. On Windows computers, extensions consist of a dot (period) at the end of a file name, followed by a few letters to identify the type of file. An extension of .txt identifies a text file; a .doc extension identifies any type of document or documentation, commonly in the Microsoft Word file format; and so on. Even when extensions are used in a computer system, the degree to which the computer system recognizes and heeds them can vary; in some systems, they are required, while in other systems, they are completely ignored even if they are present.
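
A brief sketch of how a program might inspect an extension; the file name is illustrative.

    import os

    name = "report.doc"
    stem, ext = os.path.splitext(name)   # ('report', '.doc')
    if ext.lower() == ".txt":
        kind = "plain text"
    elif ext.lower() == ".doc":
        kind = "document (commonly Microsoft Word format)"
    else:
        kind = "unknown"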

Protection

Many modern computer systems provide methods for protecting files against accidental and deliberate damage. Computers that allow for multiple users implement file permissions to control who may or may not modify, delete, or create files and folders. For example, a given user may be granted only permission to read a file or folder, but not to modify or delete it; or a user may be given permission to read and modify files or folders, but not to execute them. Permissions may also be used to allow only certain users to see the contents of a file or folder. Permissions protect against unauthorized tampering or destruction of information in files, and keep private information confidential from unauthorized users.

Another protection mechanism implemented in many computers is a read-only flag. When this flag is turned on for a file (which can be accomplished by a computer program or by a human user), the file can be examined, but it cannot be modified. This flag is useful for critical information that must not be modified or erased, such as special files that are used only by internal parts of the computer system. Some systems also include a hidden flag to make certain files invisible; this flag is used by the computer system to hide essential system files that users should not alter.
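
Operating systems expose these mechanisms in different ways; as one minimal sketch, on a POSIX-style system a program can remove the owner's write permission to make a file effectively read-only (the file name is illustrative).

    import os
    import stat

    # Grant the owner read access only, removing write permission.
    os.chmod("config.ini", stat.S_IRUSR)

    # Later, restore read and write permission for the owner.
    os.chmod("config.ini", stat.S_IRUSR | stat.S_IWUSR)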

Storage

Any file that has any useful purpose must have some physical manifestation. That is, a file (an abstract concept) in a real computer system must have a real physical analogue if it is to exist at all.

In physical terms, most computer files are stored on some type of data storage device. For example, most operating systems store files on a hard disk. Hard disks have been the ubiquitous form of non-volatile storage since the early 1960s. Where files contain only temporary information, they may be stored in RAM. Computer files can also be stored on other media in some cases, such as magnetic tapes, compact discs, Digital Versatile Discs, Zip drives, USB flash drives, etc. Solid-state drives are also beginning to rival hard disk drives.

In Unix-like operating systems, many files have no associated physical storage device. Examples are /dev/null and most files under directories /dev, /proc and /sys. These are virtual files: they exist as objects within the operating system kernel.

As seen by a running user program, files are usually represented either by a file control block or by a file handle. A file control block (FCB) is an area of memory which is manipulated to establish a filename etc. and then passed to the operating system as a parameter; it was used by older IBM operating systems and early PC operating systems including CP/M and early versions of MS-DOS. A file handle is generally either an opaque data type or an integer; it was introduced in around 1961 by the ALGOL-based Burroughs MCP running on the Burroughs B5000 but is now ubiquitous.
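
On Unix-like systems the handle is typically a small integer known as a file descriptor; a minimal Python sketch (the file name is illustrative):

    import os

    # The handle returned by the operating system is just an integer.
    fd = os.open("notes.txt", os.O_CREAT | os.O_WRONLY)
    print(fd)                                   # e.g. 3
    os.write(fd, b"stored via a file handle\n")
    os.close(fd)                                # release the handle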

File corruption

Original JPEG file (a photo of a child), and the same JPEG file corrupted by a single flipped bit (turned from 0 to 1, or vice versa). While there is visible corruption in the second file, one can still make out what the original image might have looked like.

When a file is said to be corrupted, it is because its contents have been saved to the computer in such a way that they cannot be properly read, either by a human or by software. Depending on the extent of the damage, the original file can sometimes be recovered, or at least partially understood. A file may be created corrupt, or it may be corrupted at a later point through overwriting.
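
As a rough illustration of how little it takes, the following sketch flips a single bit in a copy of a file (the file names are illustrative); for formats with little redundancy, one flipped bit can be enough to render the file unreadable.

    # Flip one bit in a copy of a file to simulate corruption.
    with open("photo.jpg", "rb") as f:
        data = bytearray(f.read())

    data[len(data) // 2] ^= 0x01          # invert a single bit (0 -> 1 or 1 -> 0)

    with open("photo_corrupted.jpg", "wb") as f:
        f.write(data)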

There are many ways by which a file can become corrupted. Most commonly, the issue happens in the process of writing the file to a disk. For example, if an image-editing program unexpectedly crashes while saving an image, that file may be corrupted because the program could not finish saving it in its entirety. The program itself might warn the user that there was an error, allowing for another attempt at saving the file. Some other examples of reasons for which files become corrupted include:

  • The computer itself shutting down unexpectedly (for example, due to a power loss) with open files, or files in the process of being saved;
  • A download being interrupted before it was completed;
  • A bad sector on the hard drive;
  • The user removing a flash drive (such as a USB stick) without properly unmounting (commonly referred to as "safely removing");
  • Malicious software, such as a computer virus;
  • A flash drive becoming too old.

Although file corruption usually happens accidentally, it may also be done on purpose as a means of procrastination, for example to fool someone else into thinking an assignment was finished at an earlier date, potentially gaining time to complete it. There are services that provide on-demand file corruption, which essentially fill a given file with random data so that it cannot be opened or read, yet still seems legitimate.

One of the most effective countermeasures for unintentional file corruption is backing up important files. In the event of an important file becoming corrupted, the user can simply replace it with the backed up version.

Backup

When computer files contain information that is extremely important, a back-up process is used to protect against disasters that might destroy the files. Backing up files simply means making copies of the files in a separate location so that they can be restored if something happens to the computer, or if they are deleted accidentally.

There are many ways to back up files. Most computer systems provide utility programs to assist in the back-up process, which can become very time-consuming if there are many files to safeguard. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk, but if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location.

The grandfather-father-son backup method automatically maintains three backups; the grandfather file is the oldest copy of the file and the son is the current copy.
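
A highly simplified sketch of such a rotation (the file and directory names are illustrative): before each new backup, the existing copies are shifted one generation back.

    import os
    import shutil

    def backup(source, backup_dir):
        # Rotate generations: father becomes grandfather, son becomes father.
        son = os.path.join(backup_dir, "son.bak")
        father = os.path.join(backup_dir, "father.bak")
        grandfather = os.path.join(backup_dir, "grandfather.bak")

        if os.path.exists(father):
            shutil.copy2(father, grandfather)
        if os.path.exists(son):
            shutil.copy2(son, father)
        shutil.copy2(source, son)      # the son is always the newest copy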

File systems and file managers

The way a computer organizes, names, stores and manipulates files is globally referred to as its file system. Most computers have at least one file system. Some computers allow the use of several different file systems. For instance, on newer MS Windows computers, the older FAT-type file systems of MS-DOS and old versions of Windows are supported, in addition to the NTFS file system that is the normal file system for recent versions of Windows. Each system has its own advantages and disadvantages. Standard FAT allows only eight-character file names (plus a three-character extension) with no spaces, for example, whereas NTFS allows much longer names that can contain spaces. You can call a file "Payroll records" in NTFS, but in FAT you would be restricted to something like payroll.dat (unless you were using VFAT, a FAT extension allowing long file names).

File manager programs are utility programs that allow users to manipulate files directly. They allow you to move, create, delete and rename files and folders, although they do not actually allow you to read the contents of a file or store information in it. Every computer system provides at least one file-manager program for its native file system. For example, File Explorer (formerly Windows Explorer) is commonly used in Microsoft Windows operating systems, and Nautilus is common under several distributions of Linux.
