
Thursday, August 8, 2024

History of the battery

From Wikipedia, the free encyclopedia
A voltaic pile, the first chemical battery

Batteries provided the primary source of electricity before the development of electric generators and electrical grids around the end of the 19th century. Successive improvements in battery technology facilitated major electrical advances, from early scientific studies to the rise of telegraphs and telephones, eventually leading to portable computers, mobile phones, electric cars, and many other electrical devices.

Scientists and engineers developed several commercially important types of battery. "Wet cells" were open containers that held a liquid electrolyte and metallic electrodes. When the electrodes were completely consumed, the wet cell was renewed by replacing the electrodes and electrolyte. Open containers are unsuitable for mobile or portable use. Wet cells were used commercially in the telegraph and telephone systems. Early electric cars used semi-sealed wet cells.

One important classification for batteries is by their life cycle. "Primary" batteries can produce current as soon as assembled, but once the active elements are consumed, they cannot be electrically recharged. The development of the lead-acid battery and subsequent "secondary" or "rechargeable" types allowed energy to be restored to the cell, extending the life of permanently assembled cells. The introduction of nickel and lithium based batteries in the latter half of the 20th century made the development of innumerable portable electronic devices feasible, from powerful flashlights to mobile phones. Very large stationary batteries find some applications in grid energy storage, helping to stabilize electric power distribution networks.

Invention

From the mid 18th century on, before there were batteries, experimenters used Leyden jars to store electrical charge. As an early form of capacitor, Leyden jars, unlike electrochemical cells, stored their charge physically and would release it all at once. Many experimenters took to hooking several Leyden jars together to create a stronger charge and one of them, the colonial American inventor Benjamin Franklin, may have been the first to call his grouping an "electrical battery", a play on the military term for weapons functioning together.

Based on some findings by Luigi Galvani, Alessandro Volta, a friend and fellow scientist, believed that the observed electrical phenomena were caused by two different metals joined by a moist intermediary. He verified this hypothesis through experiments and published the results in 1791. In 1800, Volta invented the first true battery, which stored and released charge through a chemical reaction rather than physically; it came to be known as the voltaic pile. The voltaic pile consisted of pairs of copper and zinc discs piled on top of each other, separated by a layer of cloth or cardboard soaked in brine (i.e., the electrolyte). Unlike the Leyden jar, the voltaic pile produced a continuous and stable current, and lost little charge over time when not in use, though his early models could not generate a voltage strong enough to produce sparks. He experimented with various metals and found that zinc and silver gave the best results.
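
In modern electrochemical terms (a reading unavailable to Volta himself, given only as an illustration), the pile's current can be sketched as zinc dissolving at one disc while hydrogen is evolved at the copper, the same hydrogen film that causes the polarization problem described later:

\begin{align}
\text{at the zinc disc:}\quad & \mathrm{Zn \rightarrow Zn^{2+} + 2e^-}\\
\text{at the copper disc:}\quad & \mathrm{2\,H_2O + 2e^- \rightarrow H_2 + 2\,OH^-}
\end{align}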

Volta believed the current was the result of two different materials simply touching each other – an obsolete scientific theory known as contact tension – and not the result of chemical reactions. As a consequence, he regarded the corrosion of the zinc plates as an unrelated flaw that could perhaps be fixed by changing the materials somehow. However, no scientist ever succeeded in preventing this corrosion. In fact, it was observed that the corrosion was faster when a higher current was drawn. This suggested that the corrosion was actually integral to the battery's ability to produce a current. This, in part, led to the rejection of Volta's contact tension theory in favor of the electrochemical theory. Volta's illustrations of his Crown of Cups and voltaic pile have extra metal disks, now known to be unnecessary, on both the top and bottom. The figure associated with this section, of the zinc-copper voltaic pile, has the modern design, an indication that "contact tension" is not the source of electromotive force for the voltaic pile.

Volta's original pile models had some technical flaws, one of them involving the electrolyte leaking and causing short-circuits due to the weight of the discs compressing the brine-soaked cloth. A Scotsman named William Cruickshank solved this problem by laying the elements in a box instead of piling them in a stack. This was known as the trough battery. Volta himself invented a variant that consisted of a chain of cups filled with a salt solution, linked together by metallic arcs dipped into the liquid. This was known as the Crown of Cups. These arcs were made of two different metals (e.g., zinc and copper) soldered together. This model also proved to be more efficient than his original piles, though it did not prove as popular.

A zinc-copper voltaic pile

Another problem with Volta's batteries was short battery life (an hour's worth at best), which was caused by two phenomena. The first was that the current produced electrolyzed the electrolyte solution, resulting in a film of hydrogen bubbles forming on the copper, which steadily increased the internal resistance of the battery (this effect, called polarization, is counteracted in modern cells by additional measures). The other was a phenomenon called local action, wherein minute short-circuits would form around impurities in the zinc, causing the zinc to degrade. The latter problem was solved in 1835 by the English inventor William Sturgeon, who found that amalgamated zinc, whose surface had been treated with some mercury, did not suffer from local action.

Despite its flaws, Volta's battery provided a steadier current than Leyden jars and made possible many new experiments and discoveries, such as the first electrolysis of water by the English surgeon Anthony Carlisle and the English chemist William Nicholson.

First practical batteries

Daniell cell

Schematic representation of Daniell's original cell

An English professor of chemistry named John Frederic Daniell found a way to solve the hydrogen bubble problem in the Voltaic Pile by using a second electrolyte to consume the hydrogen produced by the first. In 1836, he invented the Daniell cell, which consists of a copper pot filled with a copper sulfate solution, in which is immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. The earthenware barrier is porous, which allows ions to pass through but keeps the solutions from mixing.

The Daniell cell was a great improvement over the existing technology used in the early days of battery development and was the first practical source of electricity. It provides a longer-lasting and more reliable current than the voltaic pile, is safer and less corrosive, and has an operating voltage of roughly 1.1 volts. It soon became the industry standard, especially for the new telegraph networks.

The Daniell cell was also used as the first working standard for definition of the volt, which is the unit of electromotive force.
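
In modern notation, the Daniell cell's operation and its roughly 1.1-volt output can be summarized with the standard half-reactions (a simplified textbook sketch rather than Daniell's own formulation):

\begin{align}
\text{anode:}\quad & \mathrm{Zn \rightarrow Zn^{2+} + 2e^-} & E^\circ &= -0.76\ \mathrm{V}\\
\text{cathode:}\quad & \mathrm{Cu^{2+} + 2e^- \rightarrow Cu} & E^\circ &= +0.34\ \mathrm{V}\\
\text{overall:}\quad & \mathrm{Zn + Cu^{2+} \rightarrow Zn^{2+} + Cu} & E^\circ_{\text{cell}} &\approx 1.10\ \mathrm{V}
\end{align}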

Bird's cell

A version of the Daniell cell was invented in 1837 by the Guy's Hospital physician Golding Bird who used a plaster of Paris barrier to keep the solutions separate. Bird's experiments with this cell were of some importance to the new discipline of electrometallurgy.

Porous pot cell

Porous pot cell

The porous pot version of the Daniell cell was invented by John Dancer, a Liverpool instrument maker, in 1838. It consists of a central zinc anode dipped into a porous earthenware pot containing a zinc sulfate solution. The porous pot is, in turn, immersed in a solution of copper sulfate contained in a copper can, which acts as the cell's cathode. The use of a porous barrier allows ions to pass through but keeps the solutions from mixing.

Gravity cell

A 1919 illustration of a gravity cell. This particular variant is also known as a crowfoot cell due to the distinctive shape of the electrodes

In the 1860s, a Frenchman named Callaud invented a variant of the Daniell cell called the gravity cell. This simpler version dispensed with the porous barrier. This reduces the internal resistance of the system and, thus, the battery yields a stronger current. It quickly became the battery of choice for the American and British telegraph networks, and was widely used until the 1950s.

The gravity cell consists of a glass jar, in which a copper cathode sits on the bottom and a zinc anode is suspended beneath the rim. Copper sulfate crystals are scattered around the cathode and then the jar is filled with distilled water. As the current is drawn, a layer of zinc sulfate solution forms at the top around the anode. This top layer is kept separate from the bottom copper sulfate layer by its lower density and by the polarity of the cell.

The zinc sulfate layer is clear in contrast to the deep blue copper sulfate layer, which allows a technician to measure the battery life with a glance. On the other hand, this setup means the battery can be used only in a stationary appliance, or else the solutions mix or spill. Another disadvantage is that a current has to be continually drawn to keep the two solutions from mixing by diffusion, so it is unsuitable for intermittent use.

Poggendorff cell

In 1842, the German scientist Johann Christian Poggendorff overcame the problems that came with separating the electrolyte and the depolariser by a porous earthenware pot. In the Poggendorff cell, sometimes called the Grenet cell after the work of Eugene Grenet around 1859, the electrolyte is dilute sulfuric acid and the depolariser is chromic acid. The two acids are physically mixed together, eliminating the porous pot. The positive electrode (cathode) is two carbon plates, with a zinc plate (the negative electrode, or anode) positioned between them. Because the acid mixture tends to react with the zinc, a mechanism is provided to raise the zinc electrode clear of the acids.

The cell provides 1.9 volts. It was popular with experimenters for many years due to its relatively high voltage, its ability to produce a consistent current, and its lack of fumes, but the relative fragility of its thin glass enclosure and the need to raise the zinc plate when the cell was not in use eventually saw it fall out of favour. The cell was also known as the 'chromic acid cell', but principally as the 'bichromate cell'. The latter name came from the practice of producing the chromic acid by adding sulfuric acid to potassium dichromate, even though the cell itself contains no dichromate.

The Fuller cell was developed from the Poggendorff cell. Although the chemistry is principally the same, the two acids are once again separated by a porous container and the zinc is treated with mercury to form an amalgam.

Grove cell

The Welshman William Robert Grove invented the Grove cell in 1839. It consists of a zinc anode dipped in sulfuric acid and a platinum cathode dipped in nitric acid, separated by porous earthenware. The Grove cell provides a high current and nearly twice the voltage of the Daniell cell, which made it the favoured cell of the American telegraph networks for a time. However, it gives off poisonous nitric oxide fumes when operated. The voltage also drops sharply as the charge diminishes, which became a liability as telegraph networks grew more complex. Platinum was and still is very expensive.

Dun cell

Alfred Dun, 1885 – nitro-muriatic acid (aqua regis), iron and carbon:

In the new element there can be used advantageously as exciting-liquid in the first case such solutions as have in a concentrated condition great depolarizing-power, which effect the whole depolarization chemically without necessitating the mechanical expedient of increased carbon surface. It is preferred to use iron as the positive electrode, and as exciting-liquid nitro muriatic acid (aqua regis), the mixture consisting of muriatic and nitric acids. The nitro-muriatic acid, as explained above, serves for filling both cells. For the carbon-cells it is used strong or very slightly diluted, but for the other cells very diluted, (about one-twentieth, or at the most one-tenth). The element containing in one cell carbon and concentrated nitro-muriatic acid and in the other cell iron and dilute nitro-muriatic acid remains constant for at least twenty hours when employed for electric incandescent lighting.

Rechargeable batteries and dry cells

Lead-acid

19th-century illustration of Planté's original lead-acid cell

Up to this point, all existing batteries would be permanently drained when all their chemical reactants were spent. In 1859, Gaston Planté invented the lead–acid battery, the first-ever battery that could be recharged by passing a reverse current through it. A lead-acid cell consists of a lead anode and a lead dioxide cathode immersed in sulfuric acid. Both electrodes react with the acid to produce lead sulfate, but the reaction at the lead anode releases electrons whilst the reaction at the lead dioxide consumes them, thus producing a current. These chemical reactions can be reversed by passing a reverse current through the battery, thereby recharging it.
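
In simplified textbook form, the discharge chemistry (each cell supplying a nominal voltage of about 2 volts) can be written as:

\begin{align}
\text{negative plate:}\quad & \mathrm{Pb + HSO_4^- \rightarrow PbSO_4 + H^+ + 2e^-}\\
\text{positive plate:}\quad & \mathrm{PbO_2 + HSO_4^- + 3\,H^+ + 2e^- \rightarrow PbSO_4 + 2\,H_2O}\\
\text{overall:}\quad & \mathrm{Pb + PbO_2 + 2\,H_2SO_4 \rightarrow 2\,PbSO_4 + 2\,H_2O}
\end{align}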

Planté's first model consisted of two lead sheets separated by rubber strips and rolled into a spiral. His batteries were first used to power the lights in train carriages while stopped at a station. In 1881, Camille Alphonse Faure invented an improved version that consists of a lead grid lattice into which is pressed a lead oxide paste, forming a plate. Multiple plates can be stacked for greater performance. This design is easier to mass-produce.

Compared to other batteries, Planté's is rather heavy and bulky for the amount of energy it can hold. However, it can produce remarkably large currents in surges, because it has very low internal resistance, meaning that a single battery can be used to power multiple circuits.

The lead-acid battery is still used today in automobiles and other applications where weight is not a big factor. The basic principle has not changed since 1859. In the early 1930s, a gel electrolyte (instead of a liquid) produced by adding silica to a charged cell was used in the LT battery of portable vacuum-tube radios. In the 1970s, "sealed" versions became common (commonly known as a "gel cell" or "SLA"), allowing the battery to be used in different positions without failure or leakage.

Today cells are classified as "primary" if they produce a current only until their chemical reactants are exhausted, and "secondary" if the chemical reactions can be reversed by recharging the cell. The lead-acid cell was the first "secondary" cell.

Leclanché cell

A 1912 illustration of a Leclanché cell

In 1866, Georges Leclanché invented a battery that consists of a zinc anode and a manganese dioxide cathode wrapped in a porous material, dipped in a jar of ammonium chloride solution. The manganese dioxide cathode has a little carbon mixed into it as well, which improves conductivity and absorption. It provided a voltage of 1.4 volts. This cell achieved very quick success in telegraphy, signaling, and electric bell work.
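
The cell chemistry is often summarized in simplified modern form as zinc oxidation at the anode and reduction of manganese dioxide at the cathode:

\begin{align}
\text{anode:}\quad & \mathrm{Zn \rightarrow Zn^{2+} + 2e^-}\\
\text{cathode:}\quad & \mathrm{2\,MnO_2 + 2\,NH_4^+ + 2e^- \rightarrow Mn_2O_3 + 2\,NH_3 + H_2O}
\end{align}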

The dry cell form was used to power early telephones, usually from an adjacent wooden box that held the batteries, before telephones could draw power from the telephone line itself. The Leclanché cell cannot provide a sustained current for very long. In lengthy conversations, the battery would run down, rendering the conversation inaudible. This is because certain chemical reactions in the cell increase the internal resistance and, thus, lower the voltage.

Zinc-carbon cell, the first dry cell

Many experimenters tried to immobilize the electrolyte of an electrochemical cell to make it more convenient to use. The Zamboni pile of 1812 was a high-voltage dry battery, but one capable of delivering only minute currents. Various experiments were made with cellulose, sawdust, spun glass, asbestos fibers, and gelatine.

In 1886, Carl Gassner obtained a German patent on a variant of the Leclanché cell, which came to be known as the dry cell because it does not have a free liquid electrolyte. Instead, the ammonium chloride is mixed with plaster of Paris to create a paste, with a small amount of zinc chloride added in to extend the shelf life. The manganese dioxide cathode is dipped in this paste, and both are sealed in a zinc shell, which also acts as the anode. In November 1887, he obtained U.S. patent 373,064 for the same device.

Unlike previous wet cells, Gassner's dry cell is more solid, does not require maintenance, does not spill, and can be used in any orientation. It provides a potential of 1.5 volts. The first mass-produced model was the Columbia dry cell, first marketed by the National Carbon Company in 1896. The NCC improved Gassner's model by replacing the plaster of Paris with coiled cardboard, an innovation that left more space for the cathode and made the battery easier to assemble. It was the first convenient battery for the masses, made portable electrical devices practical, and led directly to the invention of the flashlight.

The zinc–carbon battery (as it came to be known) is still manufactured today.

In parallel, in 1887 Wilhelm Hellesen developed his own dry cell design. It has been claimed that Hellesen's design preceded that of Gassner.

In 1887, a dry battery was also developed by Sakizō Yai (屋井 先蔵) of Japan and patented in 1892. In 1893, Yai's dry battery was exhibited at the World's Columbian Exposition and attracted considerable international attention.

NiCd, the first alkaline battery

In 1899, a Swedish scientist named Waldemar Jungner invented the nickel–cadmium battery, a rechargeable battery that has nickel and cadmium electrodes in a potassium hydroxide solution; the first battery to use an alkaline electrolyte. It was commercialized in Sweden in 1910 and reached the United States in 1946. The first models were robust and had significantly better energy density than lead-acid batteries, but were much more expensive.
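
A simplified textbook form of the overall discharge reaction, for a nominal cell voltage of about 1.2 volts, is:

\begin{equation}
\mathrm{Cd + 2\,NiO(OH) + 2\,H_2O \rightarrow Cd(OH)_2 + 2\,Ni(OH)_2}
\end{equation}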

20th century: new technologies and ubiquity

Size – year introduced:
D – 1898
AA – 1907
AAA – 1911
9V – 1956

Nickel-iron

Nickel-iron batteries manufactured between 1972 and 1975 under the "Exide" brand, originally developed in 1901 by Thomas Edison.
A set of modern batteries

Waldemar Jungner patented a nickel–iron battery in 1899, the same year as his NiCd battery patent, but found it to be inferior to its cadmium counterpart and, as a consequence, never bothered to develop it. It produced much more hydrogen gas when being charged, meaning it could not be sealed, and the charging process was less efficient (it was, however, cheaper).

Seeing a way to make a profit in the already competitive lead-acid battery market, Thomas Edison worked in the 1890s on developing an alkaline-based battery that he could patent. Edison thought that if he produced a lightweight and durable battery, electric cars would become the standard, with his firm as their main battery vendor. After many experiments, and probably borrowing from Jungner's design, he patented an alkaline-based nickel–iron battery in 1901. However, customers found his first model prone to leaking, leading to short battery life, and it did not outperform the lead-acid cell by much either. Although Edison was able to produce a more reliable and powerful model seven years later, by this time the inexpensive and reliable Model T Ford had made gasoline-engined cars the standard. Nevertheless, Edison's battery achieved great success in other applications, such as electric and diesel-electric rail vehicles, backup power for railroad crossing signals, and power for the lamps used in mines.

Common alkaline batteries

Until the late 1950s, the zinc–carbon battery continued to be a popular primary cell, but its relatively short battery life hampered sales. The Canadian engineer Lewis Urry, working for Union Carbide, first at the National Carbon Co. in Ontario and, by 1955, at the National Carbon Company Parma Research Laboratory in Cleveland, Ohio, was tasked with finding a way to extend the life of zinc-carbon batteries. Building on earlier work by Edison, Urry decided instead that alkaline batteries held more promise. Until then, longer-lasting alkaline batteries had been unfeasibly expensive. Urry's battery consists of a manganese dioxide cathode and a powdered zinc anode with an alkaline electrolyte; using powdered zinc gives the anode a greater surface area. These batteries were put on the market in 1959.
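
In simplified form, the alkaline zinc–manganese dioxide chemistry behind Urry's cell is usually written as:

\begin{align}
\text{anode:}\quad & \mathrm{Zn + 2\,OH^- \rightarrow ZnO + H_2O + 2e^-}\\
\text{cathode:}\quad & \mathrm{2\,MnO_2 + H_2O + 2e^- \rightarrow Mn_2O_3 + 2\,OH^-}
\end{align}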

Nickel-hydrogen and nickel metal-hydride

The nickel–hydrogen battery entered the market as an energy-storage subsystem for commercial communication satellites.

The first consumer grade nickel–metal hydride batteries (NiMH) for smaller applications appeared on the market in 1989 as a variation of the 1970s nickel–hydrogen battery. NiMH batteries tend to have longer lifespans than NiCd batteries (and their lifespans continue to increase as manufacturers experiment with new alloys) and, since cadmium is toxic, NiMH batteries are less damaging to the environment.

Alkali metal-ion batteries

Lithium-ion battery
Curve of price and capacity of lithium-ion batteries over time; the price of these batteries declined by 97% in three decades.

Lithium is the alkali metal with the lowest density and with the greatest electrochemical potential and energy-to-weight ratio. The low atomic weight and small size of its ions also speed its diffusion, suggesting it would be an ideal battery material. Experimentation with lithium batteries began in 1912 under the American physical chemist Gilbert N. Lewis, but commercial lithium batteries did not come to market until the 1970s, in the form of primary lithium metal cells. Three-volt lithium primary cells such as the CR123A type and three-volt button cells are still widely used, especially in cameras and very small devices.

Three important developments regarding lithium batteries occurred in the 1980s. In 1980, the American chemist John B. Goodenough discovered the LiCoO2 (lithium cobalt oxide) cathode (positive electrode), and the Moroccan research scientist Rachid Yazami discovered the graphite anode (negative electrode) with a solid electrolyte. In 1981, the Japanese chemists Tokio Yamabe and Shizukuni Yata discovered a novel nano-carbonaceous PAS (polyacene) material and found that it was very effective as an anode in the conventional liquid electrolyte. This led a research team managed by Akira Yoshino of Asahi Chemical, Japan, to build the first lithium-ion battery prototype in 1985, a rechargeable and more stable version of the lithium battery; Sony commercialized the lithium-ion battery in 1991. In 2019, John Goodenough, Stanley Whittingham, and Akira Yoshino were awarded the Nobel Prize in Chemistry for their development of lithium-ion batteries.
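
In a LiCoO2/graphite cell of the kind Yoshino prototyped, both electrodes store lithium by intercalation; a simplified textbook description of discharge (nominal cell voltage around 3.6–3.7 volts) is:

\begin{align}
\text{anode:}\quad & \mathrm{Li_xC_6 \rightarrow C_6 + x\,Li^+ + x\,e^-}\\
\text{cathode:}\quad & \mathrm{Li_{1-x}CoO_2 + x\,Li^+ + x\,e^- \rightarrow LiCoO_2}
\end{align}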

In 1997, the lithium polymer battery was released by Sony and Asahi Kasei. These batteries hold their electrolyte in a solid polymer composite instead of in a liquid solvent, and the electrodes and separators are laminated to each other. The latter difference allows the battery to be encased in a flexible wrapping instead of in a rigid metal casing, which means such batteries can be specifically shaped to fit a particular device. This advantage has favored lithium polymer batteries in the design of portable electronic devices such as mobile phones and personal digital assistants, and of radio-controlled aircraft, as such batteries allow for a more flexible and compact design. They generally have a lower energy density than normal lithium-ion batteries.

High costs and concerns about mineral extraction associated with lithium chemistry have renewed interest in sodium-ion battery development, with early electric vehicle product launches in 2023.

Wednesday, August 7, 2024

Telecoms Package

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Telecoms_Package

The Telecoms Package was the review of the European Union Telecommunications Framework from 2007 to 2009. The objective of the review was to update the EU Telecoms Framework of 2002 and to create a common set of regulations for the telecoms industry across all 27 EU member states. The review consisted of a package of directives addressing the regulation of service provision, access, interconnection, users' contractual rights and users' privacy, as well as a regulation creating a new European regulatory body (BEREC).

The update to the telecoms regulations was needed to address the growth of broadband Internet. It was intended merely to address structural regulation and competitive issues concerning the broadband providers and the provision of spectrum. The Telecoms Package created a new pan-European agency, the Body of European Regulators of Electronic Communications (BEREC), overseeing telecoms regulation in the member states. It provided for member states to set minimum quality-of-service levels for broadband network transmission, and it harmonised European contractual rights for telephone and Internet subscribers. These rights included the ability to switch telephone operator within 24 hours of giving notice while retaining the phone number. Broadband and phone providers were obliged to limit the contract term to 12 months, and subscribers were to be notified of data privacy breaches.

The Telecoms Package became subject to several political controversies, including disputes over the provision of access to infrastructure by dominant broadband providers. However, the most significant controversies concerned copyright and net neutrality.

The controversy over copyright arose because of an attempt to put in amendments mandating Internet service providers to enforce copyright. It was argued that these amendments sought to implement a three-strikes regime. There was a public political argument over this matter. The debate eventually centred on one single counter-amendment, known as Amendment 138. The outcome was that the package was forced to go to three readings in the European Parliament, and a compromise amendment was drafted, with the agreement of the three European institutions – Parliament, Commission and Council. This compromise amendment is sometimes now known as the 'freedom provision'.

The net neutrality controversy arose out of changes made to transparency requirements for broadband providers, where, it was argued, those changes could permit providers to alter quality of service, or to favour or discriminate against other players.

The Telecoms Package is known in German as Telekom-Paket, in French as Paquet Telecom, in Spanish as Paquete Telecom, and in Swedish as Telekompaketet.

Legislative history

The Telecoms Package, as published in the Official Journal of the European Union, comprises two amending directives and a regulation establishing the new European regulatory body (BEREC).

The Telecoms Package was presented by Viviane Reding, the European Commissioner for Information Society, to the European Parliament in Strasbourg 13 November 2007.

The draft legislation that was presented to the European Parliament was:

Proposal for a directive of the European Parliament and of the Council amending Directives 2002/21/EC on a common regulatory framework for electronic communications networks and services, 2002/19/EC on access to, and interconnection of, electronic communications networks and services, and 2002/20/EC on the authorisation of electronic communications networks and services

Proposal for a directive of the European Parliament and of the Council amending Directive 2002/22/EC on universal service and users' rights relating to electronic communications networks, Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector and Regulation (EC) No 2006/2004 on consumer protection cooperation

Proposal for a regulation of the European Parliament and of the Council establishing the European Electronic Communications Market Authority

The Telecoms Package went through three readings in the European Parliament. The first reading concluded on 24 September 2008, the second reading on 5 May 2009, and the third reading, also known as the conciliation process, concluded at midnight on 5 November 2009.

The entire package was finally adopted by a majority vote in the European Parliament on 24 November 2009. This was, however, a legal technicality. The critical policy issues had already been decided during the three readings.

The Telecoms Package entered into European law on 18 December 2009 (the date on which it was published in the Official Journal), after which member states had 18 months to implement its provisions in national law.

The Telecoms Package was a complex piece of legislation. It was intended to update many aspects of telecoms regulation. It combined earlier directives from 2002 into two new bundles. The Framework, Access and Authorisation directives from 2002, were put into one new directive. The Universal Services and e-Privacy directives, also from 2002, were bundled together into another new directive.

In the European Commission's draft of 13 November 2007, there were two amendments that attempted to insert support for copyright enforcement, notably that EU member states should mandate their broadband providers to co-operate with rights-holders, favouring a 'three strikes' or graduated-response regime. These two amendments were Annex 1, point 19 of the Authorisation directive and Amendment 20.6 of the Universal Services directive. They sparked a major political controversy over the enforcement of copyright on the Internet.

The copyright controversy became public during the first reading in the European Parliament. It came to dominate the political debate and was the subject of a vocal activist campaign led by La Quadrature du Net. It was only resolved during the third reading, when the European Parliament drafted a new provision that reminded member state governments of their obligations under the European Convention on Human Rights, notably the right to due process.

Amendment 138

The famous (or infamous) Amendment 138 was tabled to highlight the problem of copyright enforcement, with the aim of preventing a three-strikes regime from being legitimised in European Union legislation.

Amendment 138 was an amendment to the Framework directive that sought to mandate a judicial ruling in cases where Internet access would be cut off. It was deliberately framed to target other proposals for copyright measures, the so-called 'three strikes'. The text of Amendment 138 was:

"applying the principle that no restriction may be imposed on the fundamental rights and freedoms of end-users, without a prior ruling by the judicial authorities, notably in accordance with Article 11 of the Charter of Fundamental Rights of the European Union on freedom of expression and information, save when public security is threatened in which case the ruling may be subsequent."

Amendment 138 was adopted by the European Parliament in the first reading plenary vote on 24 September 2008. This created an inter-institutional stand-off between the Parliament on the one hand, and the Commission and the Council of Ministers, on the other.

In the second reading, on 5 May 2009, the European Parliament again voted for Amendment 138.

In the Third Reading, the only issue under discussion was Amendment 138 and how to handle the copyright issue. A compromise provision was finally agreed by all three EU institutions at midnight on 4 November 2009. This provision is Article 1.3a of the Framework directive. It is sometimes known as the 'Freedom Provision'.

The text of Article 1.3a (the so-called "Freedom Provision") is:

"3a. Measures taken by Member States regarding end-users access' to, or use of, services and applications through electronic communications networks shall respect the fundamental rights and freedoms of natural persons, as guaranteed by the European Convention for the Protection of Human Rights and Fundamental Freedoms and general principles of Community law. Any of these measures regarding end-users' access to, or use of, services and applications through electronic communications networks liable to restrict those fundamental rights or freedoms may only be imposed if they are appropriate, proportionate and necessary within a democratic society, and their implementation shall be subject to adequate procedural safeguards in conformity with the European Convention for the Protection of Human Rights and Fundamental Freedoms and with general principles of Community law, including effective judicial protection and due process. Accordingly, these measures may only be taken with due respect for the principle of the presumption of innocence and the right to privacy. A prior, fair and impartial procedure shall be guaranteed, including the right to be heard of the person or persons concerned, subject to the need for appropriate conditions and procedural arrangements in duly substantiated cases of urgency in conformity with the European Convention for the Protection of Human Rights and Fundamental Freedoms. The right to effective and timely judicial review shall be guaranteed."

Net neutrality

The Telecoms Package contained provisions that concerned net neutrality. These provisions related to the transparency of information supplied by network operators and Internet Service Providers (ISPs) to their subscribers. They can be found in Articles 20 and 21 of the Universal Services directive.

These two articles were subject to considerable lobbying by the telecoms network operators, who wanted to retain the flexibility to run the networks to suit their business requirements.

Some of their demands were criticised by citizens' advocacy groups, who argued that certain proposed amendments would allow the broadband operators to use discriminatory forms of traffic management. The outcome was an oddly worded passage in the text:

"inform subscribers of any change to conditions limiting access to and/or use of services and applications, where such conditions are permitted under national law in accordance with Community law";

The Telecoms Package was a target for lobbying by American telecoms companies, notably AT&T and Verizon, seeking to get the ability to use sophisticated traffic management techniques on broadband networks embedded into European law. Filip Svab, chairman of the "Telecoms Working Group" of the Council of the European Union, which was responsible for drafting the council's changes to the Telecoms Package on the second reading, left Brussels for a new job with AT&T (External Affairs Director).

Media democracy

From Wikipedia, the free encyclopedia

Media democracy is a democratic approach to media studies that advocates for the reform of mass media to strengthen public service broadcasting and develop participation in alternative media and citizen journalism in order to create a mass media system that informs and empowers all members of society and enhances democratic values.

Media democracy is both a theory and a social movement. It is against concentration in the ownership of media, and it champions diversity of voices and perspectives within the news system.

Definition

Media democracy focuses on the empowerment of individual citizens and on the promotion of democratic ideals through the spread of information. Additionally, the approach argues that the media system itself should be democratic in its own construction, shying away from private ownership or intense regulation. Media democracy entails that media should be used to promote democracy and that media itself should be democratic. For example, it views media ownership concentration as undemocratic and as unable to promote democracy, and thus as a facet of media that must be examined critically. Both the concept and the social movements promoting it have grown in response to the increased corporate domination of mass media and the perceived shrinking of the marketplace of ideas. It understands media as a tool with the power to reach a large audience and with a central role in shaping culture.

The concept of a media democracy follows in response to the deregulation of broadcast markets and the concentration of mass media ownership. In the book Manufacturing Consent: The Political Economy of the Mass Media, authors Edward S. Herman and Noam Chomsky outline the propaganda model of media, which states that the private interests in control of media outlets shape news and information before it is disseminated to the public through the use of five information filters.

Media democracy gives people the right to participate in media. It extends the media's relationship to the public sphere, where the information gathered can be viewed and shared by the people. The relationship of media democracy and the public sphere extends to various types of media, such as social media and mainstream media, in order for people to communicate with one another through digital media and share the information they want to publish to the public.

The term also refers to a modern social movement evident in countries all over the world. It attempts to make mainstream media more accountable to the publics they serve and to create more democratic alternatives to current forms of mass media.

Media democracy advocates for:

  • Replacing the current corporate media model with one that operates democratically, rather than for profit;
  • Strengthening public service broadcasting;
  • Incorporating the use of alternative media into the larger discourse;
  • Increasing the role of citizen journalism;
  • Turning a passive audience into active participants;
  • Using the mass media to promote democratic ideals.

The competitive structure of the mass media landscape stands in opposition to democratic ideals since the competition of the marketplace affects how stories are framed and transmitted to the public. This can "hamper the ability of the democratic system to solve internal social problems as well as international conflicts in an optimal way."

Media democracy is grounded in creating a mass media system that favours a diversity of voices and opinions over ownership or consolidation, in an effort to eliminate bias in coverage. This, in turn, leads to the informed public debate necessary for a democratic state. The ability to comprehend and scrutinize the connection between press and democracy is important because media has the power to tell a society's stories and thereby influence thinking, beliefs and behaviour.

Media ownership concentration

Cultural studies have investigated changes in the increasing tendency of modern mass media in the field of politics to blur and confuse the boundaries between journalism, entertainment, public relations and advertising. A diverse range of information providers is necessary so that viewers, readers and listeners receive a broad spectrum of information from varying sources that is not tightly controlled, biased and filtered. Access to different sources of information prevents deliberate attempts at misinformation and allows the public to make their own judgments and form their own opinions. This is critical as individuals must be in a position to decide and act autonomously for there to be a functioning democracy.

The last several decades have seen an increased concentration of media ownership by large private entities. In the United States, these organizations are known as the Big Six. They include: General Electric, Walt Disney Co., News Corporation, Time Warner, Viacom, and CBS Corporation. A similar approach has been taken in Canada, where most media outlets are owned by national conglomerates. This has led to a reduction in the number of voices and opinions communicated to the public; to an increase in the commercialization of news and information; a reduction in investigative reporting; and an emphasis on infotainment and profitability over informative public discourse.

The concentration of media outlets has been encouraged by government deregulation and neoliberal trade policies. In the United States, the Telecommunications Act of 1996 removed most of the media ownership rules that were previously put in place. This led to a massive consolidation of the telecommunications industry. Over 4,000 radio stations were bought out, and minority ownership of TV stations dropped to its lowest point since 1990, when the federal government began tracking the data.

Another aspect of the concentration of media ownership is the nature of the political economy that follows. Some media theorists argue that corporate interest is put forth, and that only the small cluster of media outlets have the privilege of controlling the information that the population can access. Moreover, media democracy claims that corporate ownership and commercial pressures influence media content, sharply limiting the range of news, opinions, and entertainment citizens receive. Consequently, they call for a more equal distribution of economic, social, cultural, and information capital, which would lead to a more informed citizenry, as well as a more enlightened, representative political discourse.

To counter media ownership concentration, advocates of media democracy support media diversification. For instance, they prefer local news sources, which, operating outside the corporate economy, allow a greater variety of ideas to spread; diversity, they argue, is at the root of a fair democracy.

Internet media democracy

The World Wide Web, particularly Web 2.0, is seen as a powerful medium for facilitating the growth of a media democracy, as it offers participants "a potential voice, a platform, and access to the means of production." The internet is being utilized as a medium for political activity and for other pressing issues such as social, environmental, and economic problems. Moreover, the internet has allowed online users to participate in political discourse freely and to increase their democratic presence online and in person. Users share information such as voting polls, dates, locations, and statistics, or information about protests and news that is not yet covered by the media.

During the Arab Spring in the Middle East and North Africa, media was used for democratic purposes and uprisings. Social media sites like Facebook, Twitter, and YouTube allowed citizens to connect quickly, exchange information, and organize protests against their governments. While social media is not solely credited with the success of these protests, the technologies played an important role in instilling change in Tunisia, Egypt, and Libya. These acts show that a population can be informed through alternative media channels and adjust its behaviour accordingly. Individuals who could not access these social media platforms were still able to follow the news through satellite channels and through other people who were able to connect online. Crowdfunded websites have also been linked to a heightened spread of media democracy.

The Romanian election of 2014 serves as an example of internet media democracy. During the election, many took to social media to voice their opinions and share pictures of themselves at polling centres all around the world. The 2014 election is remembered as the first in which virtual ambition and the use of social media translated positively and directly into polling numbers. Many Romanians were actively campaigning online through social media platforms, specifically Facebook: "As more than 7 million Romanians have profiles on at least one social network and more than 70% of that one were active daily, the campaign was focused on the development of the civic participation through internet social networks."

Restriction in media

Restrictions on media may exist either directly or indirectly. Before internet media and social media became prominent, ordinary citizens rarely had much control over media. Even as the usage of social media has increased, major corporations still maintain primary control over media, as they acquire more and more of the platforms in common public use today.

It has been argued that it is the use made of media, rather than the actual messages of its content, that determines how the content should be judged. According to the Alec Charles-edited collection Media/Democracy, “It is not the press or television or the internet or even democracy itself that is good or bad. It is what we do with them that makes them so”.

The role government plays in media restriction has also been viewed with skepticism. Government involvement in media may stem from distrust between the government and the media, as the government has criticized media before. Media often takes partial blame for distrust between the government and the public on both sides: the public may feel that it receives false information through media, and the government may feel that media is giving the public false information.

These functions of media, as they currently exist, are described in a review of Victor Pickard's book, America's Battle for Media Democracy: The Triumph of Corporate Libertarianism and the Future of Media Reform, in which Josh Shepperd wrote, “If one approaches the historical question of media ownership from a public service model, the private emphasis of the system requires praise for its innovations and self-sustainability, but deserves deep interrogation for its largely uncontested claim that the system, as is, provides the best opportunity for social recognition”.

In his 2005 speech at NASIG, Leif Utne stated that media democracy is related to freedom of the press because it contributes to the reciprocal exchange of desires and information between the press, the state, and the population.

Normative roles of media in democracy

Monitorial role

The term was originally coined by Harold Lasswell, who associated the monitorial role with surveillance, "observing an extended environment for relevant information about events, conditions, trends, and threats." This form of media democracy is organized through the scanning of the real world of people, status, and events, and of potentially relevant sources of information. Such information is evaluated and verified under the guidance of relevance, importance, and the normative framework that regulates the public domain. The monitorial role means staying alert and checking political power; it provides individuals with the information they need to make their own decisions. It involves practices such as publishing reports, agendas, and threats, reporting political, social, and economic decisions, and shedding light on public opinion.

Facilitative role

Media democracy uses journalism as a means to improve the quality of public life and promote democratic forms. It serves as a glue to hold the community together, and it enhances the ability and desire to listen to others.

Radical role

The radical role goes to the "root" of power relations and inequality, exposing their negative impacts on the quality of everyday life and the health of democracy.

It stands in opposition to commercial and mainstream media, which tend to protect the interests of the powerful and fail to provide information that raises critical awareness and generates empowerment. By cultivating political advocacy, it motivates engagement in democratic political life.

Collaborative role

In the collaborative role, collaboration between media and state is always open and transparent.

Actual roles of media in democracy

There is widespread concern that mass media and social media are not serving the role that a well-functioning democracy requires. For example, the media is charged with broadcasting political news to truthfully inform the population, but those messages can have a double purpose (promoting the good of the population and advancing politicians' careers through public relations), which can make journalists' work more difficult. There is a need for the public to be informed about certain issues, whether social, political, or environmental. The media is heavily credited with keeping the public informed and up to date, but throughout the 1980s and 1990s corporate media took over the provision of this information.

Feminism

Feminist media theories argue that the media cannot be considered truly inclusive or democratic if it continues relying on masculine concepts of impartiality and objectivity. They argue that creating a more inclusive and democratic media requires reconceptualizing how we define the news and its principles. According to some feminist media theorists, news is like fictional genres that impose order and interpretation on its materials by means of narrative. Consequently, the news narrative put forward presents only one angle of a much wider picture.

It is argued that the distinction between public and private information that underpins how we define valuable or appropriate news content is also a gendered concept. The feminist argument follows that the systematic subversion of private or subjective information excludes women's voices from the popular discourse. Further to this point, feminist media theorists argue there is an assumed sense of equality implicit in the definition of the public that ignores important differences between genders in terms of their perspectives. So while media democracy in practice as alternative or citizen journalism may allow for greater diversity, these theorists argue that women's voices are framed within a masculine structure of objectivity and rationalist thinking.

Despite this criticism, there is an acceptance among some theorists that the blurring of public and private information with the introduction of some new alternative forms of media production (as well as the increase in opportunities for interaction and user-generated content) may signal a positive shift towards a more democratic and inclusive media democracy. Some forms of media democracy in practice (as citizen or alternative journalism) are challenging journalism's central tenets (objectivity and impartiality) by rejecting the idea that it is possible to tell a narrative without bias and, more to the point, that it is socially or morally preferable.

Activism

Media Democracy Day, OpenMedia, and NewsWatch Canada are all Canadian initiatives that strive for reforms in the media. They aim to give an equal voice to all interests. Others, such as the creators of the Indonesian television program Newsdotcom, focus on increasing the population's media literacy rate to make the people more critical of the news they consume.

Criticism

The media has given political parties the tools to reach large numbers of people and inform them on key issues ranging from policies to elections. The media can be seen as an enabler of democracy; better-educated voters would lead to a more legitimate government. However, critics such as Julian King have argued that malicious actors, both state and non-state, can easily hijack those same tools and use them as weapons against people. In the past few years, these critics argue, media has become a direct threat to democracy. Two organizations of the Omidyar Group, Democracy Fund and Omidyar Network, assembled to examine the relationship between media and democracy. Their initial findings presented six ways in which social media is a direct threat to democracy.

Many social media platforms, such as Facebook, utilize surveillance infrastructure to collect user data and micro-target populations with personalized advertisements. With users leaving digital footprints almost everywhere they go, social media platforms build profiles of each user in order to target them with specific advertisements. This leads to the formation of "echo chambers, polarization and hyper-partisanship": the platforms create ever-growing bubbles of one-sided information and opinion that trap users and diminish opportunities for healthy discourse.

A commonly cited effect of social media on democracy is the "spread of false and/or misleading information". Disinformation and misinformation are spread across social media at scale by both state and private actors, mainly using bots. Each poses a threat, flooding social media with multiple competing realities and pushing truth, facts and evidence to the side. Social media also follows algorithms that convert popularity into legitimacy, the idea that likes or retweets confer validity or signal mass support. In theory, this creates a distorted system for evaluating information and a false sense of representation, and it becomes harder to distinguish trolls and bots from genuine users.

Social media further allows manipulation by "populist leaders, governments and fringe actors". Populist leaders use platforms such as Twitter and Instagram to communicate with their electorate, but such platforms let them roam freely with no restrictions, silencing minority voices, showcasing momentum for their views or creating the impression of approval. Finally, social media causes the disruption of the public square. Some social media platforms have user policies and technical features that enable unintended consequences, such as hate speech, terrorist appeals, and sexual and racial harassment, discouraging civil debate and leading targeted groups to opt out of participating in public discourse.

As much as social media has made it easier for the public to receive and access news and entertainment from their devices, it has also been dangerous in terms of the rapid spread of fake news. The public is now easily reachable by those who intend to spread disinformation in order to harm and mislead, and those in authority, officials and elites use their power to dominate the narratives on social media, often to gain support and mislead the public.

774–775 carbon-14 spike

The 774–775 carbon-14 spike is an observed increase of around 1.2% in the concentration of the radioactive carbon-14 isotope in tree rings dated to 774 or 775 CE, which is about 20 times higher than the normal year-to-year variation of radiocarbon in the atmosphere. It was discovered during a study of Japanese cedar tree rings, with the year of occurrence determined through dendrochronology. A surge in the beryllium isotope ¹⁰Be, detected in Antarctic ice cores, has also been associated with the 774–775 event. The 774–775 CE carbon-14 spike is one of several Miyake events, and it produced the largest and most rapid rise in carbon-14 ever recorded.

The event appears to have been global, with the same carbon-14 signal found in tree rings from Germany, Russia, the United States, Finland, and New Zealand.

The carbon-14 spike around 774. Colored dots are measurements in Japanese (M12) and German (oak) trees; black lines are the modeled profile corresponding to the instant production of carbon-14.

The signal exhibits a sharp increase of around 1.2% followed by a slow decline, which is consistent with an instant production of carbon-14 in the atmosphere, indicating that the event was short in duration. The globally averaged production of carbon-14 for this event is (1.3 ± 0.2) × 10⁸ atoms/cm².
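
As a rough illustration of why the shape of the curve points to a short event, the sketch below treats the spike as an instantaneous injection of carbon-14 followed by a slow decline; the single e-folding time is an assumed illustrative value, not a fitted carbon-cycle parameter.

import numpy as np

# Toy model (illustrative assumptions only, not the published carbon-cycle fit):
# an instantaneous injection of radiocarbon into the atmosphere, followed by a
# slow decline as the carbon cycle redistributes the excess into the oceans and
# biosphere.  Real models couple several reservoirs instead of one e-folding time.

years = np.arange(770, 801)      # calendar years CE
event_year = 775                 # approximate year of the spike
jump_percent = 1.2               # observed rise in atmospheric carbon-14 (about 1.2%)
efolding_years = 20.0            # assumed relaxation time in years (illustrative)

excess = np.where(
    years < event_year,
    0.0,                                                            # flat baseline before the event
    jump_percent * np.exp(-(years - event_year) / efolding_years),  # sharp rise, slow decay
)

for y, d in zip(years, excess):
    print(f"{y} CE: +{d:.2f}%")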

Hypotheses

Several possible causes of the event have been considered.

The Anglo-Saxon Chronicle recorded "a red crucifix, after sunset", which has been variously hypothesised to have been a supernova or the aurora borealis.

Annus Domini (the year of the Lord) 774. This year the Northumbrians banished their king, Alred, from York at Easter-tide; and chose Ethelred, the son of Mull, for their lord, who reigned four winters. This year also appeared in the heavens a red crucifix, after sunset; the Mercians and the men of Kent fought at Otford; and wonderful serpents were seen in the land of the South-Saxons.

In China, there is only one clear reference to an aurora in the mid-770s, on 12 January 776. However, an anomalous "thunderstorm" was recorded for 775.

As established by Ilya G. Usoskin and colleagues, the current scientific paradigm is that the event was caused by a solar particle event (SPE) from a very strong solar flare, perhaps the strongest known. Another proposed origin, involving a gamma-ray burst, is regarded as unlikely, because the event was also observed in the isotopes ¹⁰Be and ³⁶Cl.

Frequency of similar events

The AD 774/75 event in view of ¹⁰Be, ¹⁴C, and ³⁶Cl

The event of 774 is the strongest spike over the last 11,000 years in the record of cosmogenic isotopes, but several other events of the same kind (Miyake events) have occurred during the Holocene epoch. The 993–994 carbon-14 spike was about 60% as strong, and another event occurred around 660 BCE. In 2023, the strongest event yet discovered was reported; it occurred in 12,350–12,349 BCE.

The event of 774 did not have any significant consequences for life on Earth, but had it happened in modern times, it might have produced catastrophic damage to modern technology, particularly to communication and space-borne navigation systems. In addition, a solar flare capable of producing the observed isotopic effect would pose considerable risk to astronauts.

¹⁴C variations are poorly understood, because annual-resolution measurements are available for only a few periods (such as 774–775). In a 2017 study, a ¹⁴C increase of about 2.0% was associated with an event in 5480 BCE; because of its long duration it is attributed not to a solar particle event but to an unusually fast grand minimum of solar activity.

Solvated electron

From Wikipedia, the free encyclopedia

A solvated electron is a free electron in a solution, in which it behaves like an anion. An electron being solvated means it is bound within the solution. The notation for a solvated electron in formulas of chemical reactions is "e⁻". Often, discussions of solvated electrons focus on their solutions in ammonia, which are stable for days, but solvated electrons also occur in water and many other solvents – in fact, in any solvent that mediates outer-sphere electron transfer. The solvated electron is responsible for a great deal of radiation chemistry.

Ammonia solutions

Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu, and Yb (also Mg using an electrolytic process), giving characteristic blue solutions. For alkali metals in liquid ammonia, the solution is blue when dilute and copper-colored when more concentrated (> 3 molar). These solutions conduct electricity. The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. The diffusivity of the solvated electron in liquid ammonia can be determined using potential-step chronoamperometry.
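
As a rough illustration of how such a chronoamperometric determination works, the sketch below applies the standard Cottrell analysis, i(t) = nFAC·sqrt(D/(πt)): the slope of current plotted against t^(-1/2) yields the diffusion coefficient. This is not a procedure from the article; the electrode area, concentration and diffusion coefficient are placeholder assumptions.

import numpy as np

F = 96485.0   # Faraday constant, C/mol
n = 1         # electrons transferred per solvated electron
A = 0.01      # electrode area, cm^2 (assumed)
C = 1.0e-6    # bulk concentration of solvated electrons, mol/cm^3 (assumed)

# Synthetic Cottrell transient generated from an assumed diffusion coefficient
D_true = 3.0e-5                                    # cm^2/s (placeholder)
t = np.array([0.01, 0.02, 0.05, 0.1, 0.2])         # time after potential step, s
i = n * F * A * C * np.sqrt(D_true / (np.pi * t))  # current, A

# Fit i versus t^(-1/2); the slope equals n*F*A*C*sqrt(D/pi)
slope, intercept = np.polyfit(t**-0.5, i, 1)
D = np.pi * (slope / (n * F * A * C))**2
print(f"recovered D = {D:.2e} cm^2/s")             # matches D_true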

Solvated electrons in ammonia are the anions of salts called electrides.

Na + 6 NH₃ → [Na(NH₃)₆]⁺ + e⁻

The reaction is reversible: evaporation of the ammonia solution produces a film of metallic sodium.

Case study: Li in NH3

Photos of two solutions in round-bottom flasks surrounded by dry ice; one solution is dark blue, the other golden.
Solutions obtained by dissolution of lithium in liquid ammonia. The solution at the top has a dark blue color and the lower one a golden color. The colors are characteristic of solvated electrons at electronically insulating and metallic concentrations, respectively.

A lithium–ammonia solution at −60 °C is saturated at about 15 mol% metal (MPM). When the concentration is increased in this range, electrical conductivity increases from 10⁻² to 10⁴ Ω⁻¹·cm⁻¹ (larger than liquid mercury). At around 8 MPM, a "transition to the metallic state" (TMS) takes place (also called a "metal-to-nonmetal transition" (MNMT)). At 4 MPM a liquid–liquid phase separation takes place: the less dense gold-colored phase becomes immiscible with a denser blue phase. Above 8 MPM the solution is bronze/gold-colored. In the same concentration range the overall density decreases by 30%.

Other solvents

Alkali metals also dissolve in some small primary amines, such as methylamine and ethylamine, and in hexamethylphosphoramide, forming blue solutions. THF dissolves alkali metals, but a Birch reduction (see § Applications) analogue does not proceed without a diamine ligand. Solvated electron solutions of the alkaline earth metals magnesium, calcium, strontium and barium in ethylenediamine have been used to intercalate graphite with these metals.

Water

Solvated electrons are involved in the reaction of alkali metals with water, even though the solvated electron has only a fleeting existence there. Below pH 9.6 the hydrated electron reacts with the hydronium ion to give atomic hydrogen, which in turn can react with another hydrated electron to give hydroxide ion and molecular hydrogen, H₂.

Solvated electrons can be found even in the gas phase. This implies their possible existence in the upper atmosphere of Earth and involvement in nucleation and aerosol formation.

Its standard electrode potential is −2.77 V. Its equivalent conductivity of 177 mho·cm² is similar to that of the hydroxide ion. This value of equivalent conductivity corresponds to a diffusivity of 4.75 × 10⁻⁵ cm²·s⁻¹.
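
As a consistency check (my own calculation, not from the article), the Nernst–Einstein relation D = RTλ/(z²F²) links the quoted equivalent conductivity to a diffusivity of roughly this magnitude:

# Nernst-Einstein estimate of the hydrated electron's diffusivity from its
# equivalent conductivity. Room temperature is assumed.
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K (assumed)
F = 96485.0      # Faraday constant, C/mol
z = 1            # charge number

lam = 177.0e-4   # equivalent conductivity: 177 S*cm^2/mol = 177e-4 S*m^2/mol

D = R * T * lam / (z**2 * F**2)        # m^2/s
print(f"D ~ {D * 1e4:.2e} cm^2/s")     # ~4.7e-5 cm^2/s, consistent with the text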

Reactivity

Although quite stable, the blue ammonia solutions containing solvated electrons degrade rapidly in the presence of catalysts to give colorless solutions of sodium amide:

2 [Na(NH₃)₆]⁺e⁻ → H₂ + 2 NaNH₂ + 10 NH₃

Electride salts can be isolated by the addition of macrocyclic ligands such as crown ether and cryptands to solutions containing solvated electrons. These ligands strongly bind the cations and prevent their re-reduction by the electron.

[Na(NH₃)₆]⁺e⁻ + cryptand → [Na(cryptand)]⁺e⁻ + 6 NH₃

The solvated electron reacts with oxygen to form a superoxide radical (O₂•⁻). With nitrous oxide, solvated electrons react to form hydroxyl radicals (HO•).

Applications

Solvated electrons are involved in electrode processes, a broad area with many technical applications (electrosynthesis, electroplating, electrowinning).

A specialized use of sodium-ammonia solutions is the Birch reduction. Other reactions where sodium is used as a reducing agent also are assumed to involve solvated electrons, e.g. the use of sodium in ethanol as in the Bouveault–Blanc reduction.

Work by Cullen et al. showed that metal-ammonia solutions can be used to intercalate a range of layered materials, which can then be exfoliated in polar, aprotic solvents to produce ionic solutions of two-dimensional materials. An example of this is the intercalation of graphite with potassium and ammonia, which is then exfoliated by spontaneous dissolution in THF to produce a graphenide solution.

History

The observation of the color of metal-electride solutions is generally attributed to Humphry Davy. In 1807–1809, he examined the addition of grains of potassium to gaseous ammonia (liquefaction of ammonia was first achieved in 1823). James Ballantyne Hannay and J. Hogarth repeated the experiments with sodium in 1879–1880. W. Weyl in 1864 and C. A. Seely in 1871 used liquid ammonia, whereas Hamilton Cady in 1897 related the ionizing properties of ammonia to those of water. Charles A. Kraus measured the electrical conductance of metal ammonia solutions and in 1907 attributed it to the electrons liberated from the metal. In 1918, G. E. Gibson and W. L. Argo introduced the solvated electron concept. They noted, based on absorption spectra, that different metals and different solvents (methylamine, ethylamine) produce the same blue color, attributed to a common species, the solvated electron. In the 1970s, solid salts containing electrons as the anion were characterized.

Carrington Event

From Wikipedia, the free encyclopedia
Carrington Event
A black and white sketch of a large cluster of sunspots on the surface of the Sun.
Sunspots of 1 September 1859, as sketched by Richard Carrington. A and B mark the initial positions of an intensely bright event, which moved over the course of five minutes to C and D before disappearing.
Coronal mass ejection
Travel time: 17.6 hr (est.)

Geomagnetic storm
Initial onset: 1 September 1859
Dissipated: 2 September 1859
Impacts: Severe damage to telegraph stations

The Carrington Event was the most intense geomagnetic storm in recorded history, peaking on 1–2 September 1859 during solar cycle 10. It created strong auroral displays that were reported globally and caused sparking and even fires in telegraph stations. The geomagnetic storm was most likely the result of a coronal mass ejection (CME) from the Sun colliding with Earth's magnetosphere.

The geomagnetic storm was associated with a very bright solar flare on 1 September 1859. It was observed and recorded independently by British astronomers Richard Carrington and Richard Hodgson—the first records of a solar flare. A geomagnetic storm of this magnitude occurring today has the potential to cause widespread electrical disruptions, blackouts and damage due to extended outages of the electrical power grid.

History

Geomagnetic storm

Image of the July 2012 solar storm, which generated CMEs of comparable strength to the one of 1859. Note the small bright circle in the light baffle which demonstrates the size of the Sun.

On 1 and 2 September 1859, one of the largest geomagnetic storms (as recorded by ground-based magnetometers) occurred. Estimates of the storm strength (Dst) range from −0.80 to −1.75 μT.

The geomagnetic storm is thought to have been caused by a big coronal mass ejection (CME) that traveled directly toward Earth, taking 17.6 hours to make the 150×10⁶ km (93×10⁶ mi) journey. Typical CMEs take several days to arrive at Earth, but it is believed that the relatively high speed of this CME was made possible by a prior CME, perhaps the cause of the large aurora event on 29 August that "cleared the way" of ambient solar wind plasma for the Carrington Event.
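
A rough arithmetic sketch (with a typical CME speed assumed by me, not given in the article) shows why a 17.6-hour transit is exceptional:

# Implied average speed of the 1859 CME versus a more typical CME.
distance_km = 150e6          # Sun-Earth distance quoted above, km
transit_hours = 17.6

avg_speed_km_s = distance_km / (transit_hours * 3600)
print(f"average CME speed ~ {avg_speed_km_s:.0f} km/s")         # ~2,400 km/s

typical_speed_km_s = 500     # assumed typical CME speed
typical_days = distance_km / (typical_speed_km_s * 86400)
print(f"a ~500 km/s CME would take ~ {typical_days:.1f} days")  # ~3.5 days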

Associated solar flare

Just before noon on 1 September 1859, the English amateur astronomers Richard Carrington and Richard Hodgson independently recorded the earliest observations of a solar flare. Carrington and Hodgson compiled independent reports which were published side by side in Monthly Notices of the Royal Astronomical Society and exhibited their drawings of the event at the November 1859 meeting of the Royal Astronomical Society.

Because of a geomagnetic solar flare effect (a "magnetic crochet") observed in the Kew Observatory magnetometer record by Scottish physicist Balfour Stewart, and a geomagnetic storm observed the following day, Carrington suspected a solar–terrestrial connection. Worldwide reports of the effects of the geomagnetic storm of 1859 were compiled and published by American mathematician Elias Loomis; these reports supported the observations of Carrington and Stewart.

Impact

Auroras

Aurora during a geomagnetic storm that was most likely caused by a coronal mass ejection from the Sun on 24 May 2010, taken from the International Space Station

Auroras were seen around the world in the northern and southern hemispheres. The aurora borealis over the Rocky Mountains in the United States was so bright that the glow woke gold miners, who were reported to have begun to prepare breakfast because they thought it was morning. It was also reported that people in the north-eastern United States could read a newspaper by the aurora's light. The aurora was also visible from the poles to low latitude areas such as south-central Mexico, Cuba, Hawaii, Queensland, southern Japan and China, and even at lower latitudes very close to the equator, such as in Colombia.

On Saturday 3 September 1859, the Baltimore American and Commercial Advertiser reported that

Those who happened to be out late on Thursday night had an opportunity of witnessing another magnificent display of the auroral lights. The phenomenon was very similar to the display on Sunday night, though at times the light was, if possible, more brilliant, and the prismatic hues more varied and gorgeous. The light appeared to cover the whole firmament, apparently like a luminous cloud, through which the stars of the larger magnitude indistinctly shone. The light was greater than that of the moon at its full, but had an indescribable softness and delicacy that seemed to envelop everything upon which it rested. Between 12 and 1 o'clock, when the display was at its full brilliancy, the quiet streets of the city resting under this strange light, presented a beautiful as well as singular appearance.

In 1909, an Australian gold miner named C. F. Herbert retold his observations in a letter to the Daily News in Perth,

I was gold-digging at Rokewood, about four miles [6 km] from Rokewood township (Victoria). Myself and two mates looking out of the tent saw a great reflection in the southern heavens at about 7 o'clock p.m., and in about half an hour, a scene of almost unspeakable beauty presented itself: Lights of every imaginable color were issuing from the southern heavens, one color fading away only to give place to another if possible more beautiful than the last, the streams mounting to the zenith, but always becoming a rich purple when reaching there, and always curling round, leaving a clear strip of sky, which may be described as four fingers held at arm's length. The northern side from the zenith was also illuminated with beautiful colors, always curling round at the zenith, but were considered to be merely a reproduction of the southern display, as all colors south and north always corresponded. It was a sight never to be forgotten, and was considered at the time to be the greatest aurora recorded [...]. The rationalist and pantheist saw nature in her most exquisite robes, recognising, the divine immanence, immutable law, cause, and effect. The superstitious and the fanatical had dire forebodings, and thought it a foreshadowing of Armageddon and final dissolution.

Telegraphs

Because of geomagnetically induced currents driven by the changing geomagnetic field, telegraph systems all over Europe and North America failed, in some cases giving their operators electric shocks. Telegraph pylons threw sparks. Some operators were able to continue to send and receive messages despite having disconnected their power supplies. The following conversation occurred between two operators of the American telegraph line between Boston, Massachusetts, and Portland, Maine, on the night of 2 September 1859, as reported in the Boston Evening Traveler:

Boston operator (to Portland operator): "Please cut off your battery [power source] entirely for fifteen minutes."

Portland operator: "Will do so. It is now disconnected."

Boston: "Mine is disconnected, and we are working with the auroral current. How do you receive my writing?"

Portland: "Better than with our batteries on. – Current comes and goes gradually."

Boston: "My current is very strong at times, and we can work better without the batteries, as the aurora seems to neutralize and augment our batteries alternately, making current too strong at times for our relay magnets. Suppose we work without batteries while we are affected by this trouble."

Portland: "Very well. Shall I go ahead with business?"

Boston: "Yes. Go ahead."

The conversation was carried on for around two hours using no battery power at all and working solely with the current induced by the aurora, the first time on record that more than a word or two was transmitted in such manner.

Similar events

Another strong solar storm occurred in February 1872. Less severe storms occurred in 1921 (comparable to the Carrington Event by some measures), 1938, 1941, 1958, 1959 and 1960, when widespread radio disruption was reported. The flares and CMEs of the August 1972 solar storms were similar to the Carrington event in size and magnitude; however, unlike the 1859 storms, they did not cause an extreme geomagnetic storm. The March 1989 geomagnetic storm knocked out power across large sections of Quebec, while the 2003 Halloween solar storms registered the most powerful solar explosions ever recorded. On 23 July 2012, a "Carrington-class" solar superstorm (solar flare, CME, solar electromagnetic pulse) was observed, but its trajectory narrowly missed Earth. During the May 2024 solar storms, the aurora borealis was sighted as far south as Puerto Rico.

In June 2013, a joint venture from researchers at Lloyd's of London and Atmospheric and Environmental Research (AER) in the US used data from the Carrington Event to estimate the cost of a similar event in the present to the US alone at US$600 billion to $2.6 trillion (equivalent to $774 billion to $3.35 trillion in 2023), which, at the time, equated to roughly 3.6 to 15.5 percent of annual GDP.
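
As a quick sanity check of those percentages (the 2013 US GDP figure of roughly $16.7 trillion is my assumption, not stated in the article):

# Damage estimates as a share of an assumed 2013 US GDP of ~$16.7 trillion.
gdp_2013_usd = 16.7e12
low_estimate, high_estimate = 0.6e12, 2.6e12
print(f"{low_estimate / gdp_2013_usd:.1%} to {high_estimate / gdp_2013_usd:.1%}")
# ~3.6% to ~15.6%, close to the 3.6 to 15.5 percent quoted above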

Other research has looked for signatures of large solar flares and CMEs in carbon-14 in tree rings and beryllium-10 (among other isotopes) in ice cores. The signature of a large solar storm has been found for the years 774–775 and 993–994. Carbon-14 levels recorded for 775 suggest an event about 20 times the normal variation of the Sun's activity, and 10 or more times the size of the Carrington Event. An event in 7176 BCE may have exceeded even the 774–775 event based on this proxy data.

Whether the physics of solar flares is similar to that of even larger superflares is still unclear. The Sun may differ in important ways such as size and speed of rotation from the types of stars that are known to produce superflares.

Other evidence

Ice cores containing thin nitrate-rich layers have been analysed to reconstruct a history of past solar storms predating reliable observations. This was based on the hypothesis that solar energetic particles would ionize nitrogen, leading to the production of nitric oxide and other oxidised nitrogen compounds, which would not be too diluted in the atmosphere before being deposited along with snow.

Beginning in 1986, some researchers claimed that data from Greenland ice cores showed evidence of individual solar particle events, including the Carrington Event. More recent ice core work, however, casts significant doubt on this interpretation and shows that nitrate spikes are likely not a result of solar energetic particle events but can be due to terrestrial events such as forest fires, and correlate with other chemical signatures of known forest fire plumes. Nitrate events in cores from Greenland and Antarctica do not align, so the hypothesis that they reflect proton events is now in significant doubt.

A 2024 study analysed digitized magnetogram readings from magnetic observatories at Kew and Greenwich. "Initial analysis suggests the rates of change of the field of over 700 nT/min exceeded the 1-in-100 years extreme value of 350–400 nT/min at this latitude based on digital-era records", indicating a rate of change far greater than anything seen in modern digital-era measurements.

Pre-Marxist communism

From Wikipedia, the free encyclopedia
Chiefs of the Six Nations of the Hauden...