
Thursday, August 31, 2023

Nuclear weapon yield

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Nuclear_weapon_yield
Log–log plot comparing the yield (in kilotonnes) and mass (in kilograms) of various nuclear weapons developed by the United States.

The explosive yield of a nuclear weapon is the amount of energy released, in forms such as blast, thermal radiation, and nuclear radiation, when that particular nuclear weapon is detonated, usually expressed as a TNT equivalent (the standardized equivalent mass of trinitrotoluene which, if detonated, would produce the same energy discharge), either in kilotonnes (kt—thousands of tonnes of TNT), in megatonnes (Mt—millions of tonnes of TNT), or sometimes in terajoules (TJ). An explosive yield of one terajoule is equal to 0.239 kilotonnes of TNT. Because the accuracy of any measurement of the energy released by TNT has always been problematic, the conventional definition is that one kilotonne of TNT is held simply to be equivalent to 10^12 calories.

The yield-to-weight ratio is the amount of weapon yield compared to the mass of the weapon. The practical maximum yield-to-weight ratio for fusion weapons (thermonuclear weapons) has been estimated at six megatonnes of TNT per tonne of bomb mass (25 TJ/kg). Yields of 5.2 megatonnes/tonne and higher have been reported for large weapons constructed for single-warhead use in the early 1960s. Since then, the smaller warheads needed to achieve the increased net damage efficiency (bomb damage/bomb mass) of multiple-warhead systems have resulted in decreases in the yield/mass ratio for single modern warheads.
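
These unit conventions are easy to sanity-check. The following minimal Python sketch is illustrative only; the constant and function names are ours:

```python
# TNT-equivalent unit conversions (conventional definition: 1 kt TNT = 10^12 cal).
J_PER_CAL = 4.184                 # joules per thermochemical calorie
J_PER_KT = 1e12 * J_PER_CAL       # joules per kilotonne of TNT = 4.184e12 J

def kt_to_tj(kt):
    """Kilotonnes of TNT -> terajoules."""
    return kt * J_PER_KT / 1e12

def tj_to_kt(tj):
    """Terajoules -> kilotonnes of TNT."""
    return tj * 1e12 / J_PER_KT

print(tj_to_kt(1.0))                  # ~0.239 kt per TJ, as quoted above

# Cross-check the fusion-weapon limit: 6 Mt of TNT per tonne of bomb mass.
j_per_tonne_tnt = J_PER_KT / 1000     # 1 tonne TNT = 4.184e9 J
limit = 6e6 * j_per_tonne_tnt / 1000  # joules per kilogram of bomb mass
print(limit / 1e12)                   # ~25.1 TJ/kg, matching the quoted 25 TJ/kg
```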

Examples of nuclear weapon yields

In order of increasing yield (most yield figures are approximate):

(Yields are given in kilotonnes of TNT and terajoules; the weight of nuclear material is noted where reported.)

  • Davy Crockett: 0.02 kt (0.084 TJ). Variable-yield tactical nuclear weapon; at only 23 kg (51 lb), the lightest ever deployed by the United States (same warhead as the Special Atomic Demolition Munition and the GAR-11 Nuclear Falcon missile).
  • AIR-2 Genie: 1.5 kt (6.3 TJ). An unguided air-to-air rocket armed with a W25 nuclear warhead, developed to intercept bomber squadrons. Total weight of nuclear material and bomb was 98.8–100.2 kg.
  • Hiroshima's "Little Boy" gravity bomb: 13–18 kt (54–75 TJ). Gun-type uranium-235 fission bomb (the first of the two nuclear weapons used in warfare). 64 kg of uranium-235, of which about 1.38% fissioned.
  • Nagasaki's "Fat Man" gravity bomb: 19–23 kt (79–96 TJ). Implosion-type plutonium-239 fission bomb (the second of the two nuclear weapons used in warfare). 6.2 kg of plutonium-239, of which about 1 kg fissioned.
  • W76 warhead: 100 kt (420 TJ). Twelve of these may be carried in a MIRVed Trident II missile; treaty limited to eight.
  • W87 warhead: 300 kt (1,300 TJ). Ten of these were carried in a MIRVed LGM-118A Peacekeeper.
  • W88 warhead: 475 kt (1,990 TJ). Twelve of these may be carried in a Trident II missile; treaty limited to eight.
  • Ivy King device: 500 kt (2,100 TJ). Most powerful US pure-fission bomb: 60 kg of highly enriched uranium (HEU), implosion type. Never deployed.
  • Orange Herald Small: 800 kt (3,300 TJ). Most powerful tested UK boosted-fission missile warhead. 117 kg of uranium-235.
  • B83 nuclear bomb: 1,200 kt (5,000 TJ). Variable-yield weapon; most powerful US weapon in active service.
  • B53 nuclear bomb: 9,000 kt (38,000 TJ). Was the most powerful US bomb in active service until 1997; 50 were retained as part of the "Hedge" portion of the Enduring Stockpile until completely dismantled in 2011. The Mod 11 variant of the B61 replaced the B53 in the bunker-busting role. The W53 warhead from the weapon was used on the Titan II missile until the system was decommissioned in 1987.
  • Castle Bravo device: 15,000 kt (63,000 TJ). Most powerful US test. Never deployed. 400 kg of lithium-6 deuteride.
  • EC17/Mk-17, EC24/Mk-24, and B41 (Mk-41): 25,000 kt (100,000 TJ). Most powerful US weapons ever: 25 megatonnes of TNT (100 PJ). The Mk-17 was also the largest in size and mass: about 20 short tons (18,000 kg). The Mk-41 or B41 had a mass of 4,800 kg and a yield of 25 Mt, making it the highest yield-to-weight weapon ever produced. All were gravity bombs carried by the B-36 bomber (retired by 1957).
  • The entire Operation Castle nuclear test series: 48,200 kt (202,000 TJ). The highest-yielding test series conducted by the US.
  • Tsar Bomba device: 50,000 kt (210,000 TJ). USSR; most powerful nuclear weapon ever detonated, with a yield of 50 megatonnes (50 million tonnes of TNT). In its "final" form (i.e., with a depleted-uranium tamper instead of one made of lead) it would have been 100 megatonnes.
  • All nuclear testing as of 1996: 510,300 kt (2,135,000 TJ). Total energy expended during all nuclear testing.
  • Total energy produced by the Sun over its stellar lifetime: 1.353×10^32 kt (5.66×10^32 TJ).
Comparative fireball radii for a selection of nuclear weapons. Contrary to the image, which may depict the initial fireball radius, the maximum average fireball radius of Castle Bravo, a 15-megatonne-yield surface burst, is 3.3 to 3.7 km (2.1 to 2.3 mi), not the 1.42 km displayed in the image. Similarly, the maximum average fireball radius of a 21-kilotonne low-altitude airburst, which is the modern estimate for Fat Man, is 0.21 to 0.24 km (0.13 to 0.15 mi), not the 0.1 km of the image.

In comparison, the blast yield of the GBU-43 Massive Ordnance Air Blast bomb is 0.011 kt, and that of the Oklahoma City bombing, using a truck-based fertilizer bomb, was 0.002 kt. The estimated strength of the 2020 explosion at the Port of Beirut was 0.3–0.5 kt. Most artificial non-nuclear explosions are considerably smaller than even what are considered to be very small nuclear weapons.

Yield limits

The yield-to-mass ratio is the amount of weapon yield compared to the mass of the weapon. According to nuclear-weapons designer Ted Taylor, the practical maximum yield-to-mass ratio for fusion weapons is about 6 megatonnes of TNT per tonne (25 TJ/kg). The "Taylor limit" is not derived from first principles, and weapons with yields as high as 9.5 megatonnes per tonne have been theorized. The highest achieved values are somewhat lower, and the value tends to be lower for smaller, lighter weapons, of the sort that are emphasized in today's arsenals, designed for efficient MIRV use or delivery by cruise missile systems.

  • The 25 Mt yield option reported for the B41 would give it a yield-to-mass ratio of 5.1 megatonnes of TNT per tonne. While this would require a far greater efficiency than any other current U.S. weapon (at least 40% efficiency in a fusion fuel of lithium deuteride), this was apparently attainable, probably by the use of higher than normal lithium-6 enrichment in the lithium deuteride fusion fuel. This results in the B41 still retaining the record for the highest yield-to-mass weapon ever designed.
  • The W56 demonstrated a yield-to-mass ratio of 4.96 kt per kilogram of device mass, very close to the predicted 5.1 kt/kg of the highest yield-to-mass weapon ever built, the 25-megatonne B41. Unlike the B41, which was never proof-tested at full yield, the W56 demonstrated its efficiency in the XW-56X2 Bluestone shot of Operation Dominic in 1962; thus, from information available in the public domain, the W56 may hold the distinction of demonstrating the highest efficiency in a nuclear weapon to date.
  • In 1963 the DOE declassified statements that the U.S. had the technological capability of deploying a 35 Mt warhead on the Titan II, or a 50–60 Mt gravity bomb on B-52s. Neither weapon was pursued, but either would have required yield-to-mass ratios superior to those of the 25 Mt Mk-41. This may have been achievable by utilizing the same design as the B41 but with the addition of a HEU tamper in place of the cheaper but lower-energy-density U-238 tamper, which is the most commonly used tamper material in Teller–Ulam thermonuclear weapons.
  • For current smaller US weapons, the ratio is 600 to 2,200 kilotonnes of TNT per tonne. By comparison, for very small tactical devices such as the Davy Crockett it was 0.4 to 40 kilotonnes of TNT per tonne. For historical comparison, Little Boy's ratio was only 4 kilotonnes of TNT per tonne, while the largest weapon, the Tsar Bomba, achieved 2 megatonnes of TNT per tonne (deliberately reduced from about twice as much yield for the same weapon, so there is little doubt that this bomb as designed was capable of 4 megatonnes per tonne).
  • The largest pure-fission bomb ever constructed, Ivy King, had a 500 kilotonne yield, which is probably in the range of the upper limit on such designs. Fusion boosting could likely raise the efficiency of such a weapon significantly, but eventually all fission-based weapons have an upper yield limit due to the difficulties of dealing with large critical masses. (The UK's Orange Herald was a very large boosted fission bomb, with a yield of 800 kilotonnes.) However, there is no known upper yield limit for a fusion bomb.

Large single warheads are seldom a part of today's arsenals, since smaller MIRV warheads, spread out over a pancake-shaped destructive area, are far more destructive for a given total yield, or unit of payload mass. This effect results from the fact that destructive power of a single warhead on land scales approximately only as the cube root of its yield, due to blast "wasted" over a roughly hemispherical blast volume, while the strategic target is distributed over a circular land area with limited height and depth. This effect more than makes up for the lessened yield/mass efficiency encountered if ballistic missile warheads are individually scaled down from the maximal size that could be carried by a single-warhead missile.
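
The cube-root scaling argument can be illustrated numerically. The following Python sketch is ours, not part of the original article; it uses the rough rule that the radius to a fixed overpressure grows as yield^(1/3), so the area covered grows as yield^(2/3), with warhead figures taken from the table above:

```python
# Illustration of cube-root blast scaling (not a weapons-effects model):
# the radius at which a given overpressure occurs scales roughly as yield**(1/3),
# so the ground area covered scales as yield**(2/3).

def blast_area(yield_kt):
    """Relative area (arbitrary units) swept to a fixed overpressure."""
    return yield_kt ** (2.0 / 3.0)

single = blast_area(9000)           # one 9 Mt warhead (B53-class)
mirv   = 10 * blast_area(475)       # ten 475 kt warheads (W88-class)

# Despite carrying only about half the total yield (4.75 Mt vs 9 Mt),
# the ten smaller warheads cover roughly 1.4x the area.
print(single, mirv, mirv / single)
```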

Yield efficiency

The efficiency of an atomic bomb is the ratio of its actual yield to its theoretical maximum yield. Not all atomic bombs have the same yield efficiency, since each individual bomb's design plays a large role in how efficient it can be. To maximize yield efficiency, the critical mass must be assembled correctly, and devices such as tampers and initiators are incorporated into the design. A tamper, typically made of uranium, holds the core together by its inertia: it prevents the core from blowing apart before maximum fission has occurred, so as not to cause a "fizzle". An initiator is a neutron source, either inside the core or on the outside of the bomb, that floods the core with neutrons at the moment of detonation, kick-starting the chain reaction so that as many fission reactions as possible occur and yield is maximized.
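
As a rough illustration of the efficiency idea, the following sketch estimates fission yield from the mass that actually fissions. The ~200 MeV per U-235 fission is a standard textbook figure and an assumption of ours; the 64 kg and ~1.38% figures for Little Boy come from the table above:

```python
# Rough yield estimate for a fission bomb from the fissioned mass (illustrative).
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
E_PER_FISSION = 200 * MEV_TO_J      # ~200 MeV released per U-235 fission (textbook value)
J_PER_KT = 4.184e12                 # joules per kilotonne of TNT

def fission_yield_kt(mass_kg, fissioned_fraction, molar_mass_g=235.0):
    """Yield in kt of TNT for a given fissile mass and fraction fissioned."""
    atoms = mass_kg * 1000 / molar_mass_g * AVOGADRO
    return atoms * fissioned_fraction * E_PER_FISSION / J_PER_KT

# Little Boy: 64 kg of U-235, of which about 1.38% fissioned (per the table above).
print(fission_yield_kt(64, 0.0138))   # ~17 kt, near the top of the 13-18 kt range
```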

Milestone nuclear explosions

The following list is of milestone nuclear explosions. In addition to the atomic bombings of Hiroshima and Nagasaki, the first nuclear test of a given weapon type for a country is included, as well as tests that were otherwise notable (such as the largest test ever). All yields (explosive power) are given in their estimated energy equivalents in kilotons of TNT (see TNT equivalent). Putative tests (like Vela incident) have not been included.

(Each entry gives the date, test or weapon name, approximate yield in kilotonnes of TNT, country, and significance.)

  • July 16, 1945: Trinity, 18–20 kt, United States. First fission-device test; first plutonium-implosion detonation.
  • August 6, 1945: Little Boy, 12–18 kt, United States. Bombing of Hiroshima, Japan; first detonation of a uranium gun-type device; first use of a nuclear device in combat.
  • August 9, 1945: Fat Man, 18–23 kt, United States. Bombing of Nagasaki, Japan; second detonation of a plutonium implosion device (the first being the Trinity test); second and last use of a nuclear device in combat.
  • August 29, 1949: RDS-1, 22 kt, Soviet Union. First fission-weapon test by the Soviet Union.
  • May 8, 1951: George, 225 kt, United States. First boosted nuclear weapon test; first weapon test to employ fusion in any measure.
  • October 3, 1952: Hurricane, 25 kt, United Kingdom. First fission-weapon test by the United Kingdom.
  • November 1, 1952: Ivy Mike, 10,400 kt, United States. First "staged" thermonuclear weapon, with cryogenic fusion fuel; primarily a test device and not weaponized.
  • November 16, 1952: Ivy King, 500 kt, United States. Largest pure-fission weapon ever tested.
  • August 12, 1953: RDS-6s, 400 kt, Soviet Union. First fusion-weapon test by the Soviet Union (not "staged").
  • March 1, 1954: Castle Bravo, 15,000 kt, United States. First "staged" thermonuclear weapon using dry fusion fuel; a serious nuclear fallout accident occurred. Largest nuclear detonation conducted by the United States.
  • November 22, 1955: RDS-37, 1,600 kt, Soviet Union. First "staged" thermonuclear weapon test by the Soviet Union (deployable).
  • May 31, 1957: Orange Herald, 720 kt, United Kingdom. Largest boosted-fission weapon ever tested; intended as a fallback "in megaton range" in case British thermonuclear development failed.
  • November 8, 1957: Grapple X, 1,800 kt, United Kingdom. First (successful) "staged" thermonuclear weapon test by the United Kingdom.
  • February 13, 1960: Gerboise Bleue, 70 kt, France. First fission-weapon test by France.
  • October 31, 1961: Tsar Bomba, 50,000 kt, Soviet Union. Largest thermonuclear weapon ever tested; scaled down from its initial 100 Mt design by 50%.
  • October 16, 1964: 596, 22 kt, China. First fission-weapon test by the People's Republic of China.
  • June 17, 1967: Test No. 6, 3,300 kt, China. First "staged" thermonuclear weapon test by the People's Republic of China.
  • August 24, 1968: Canopus, 2,600 kt, France. First "staged" thermonuclear weapon test by France.
  • May 18, 1974: Smiling Buddha, 12 kt, India. First fission nuclear explosive test by India.
  • May 11, 1998: Pokhran-II, 45–50 kt, India. First potential fusion-boosted weapon test by India; first deployable fission-weapon test by India.
  • May 28, 1998: Chagai-I, 40 kt, Pakistan. First fission-weapon (boosted) test by Pakistan.[15]
  • October 9, 2006: 2006 nuclear test, under 1 kt, North Korea. First fission-weapon test by North Korea (plutonium-based).
  • September 3, 2017: 2017 nuclear test, 200–300 kt, North Korea. First "staged" thermonuclear weapon test claimed by North Korea.

Calculating yields and controversy

Yields of nuclear explosions can be very hard to calculate, even using numbers as rough as the kilotonne or megatonne range (much less down to the resolution of individual terajoules). Even under very controlled conditions, precise yields can be very hard to determine, and for less controlled conditions the margins of error can be quite large. For fission devices, the most precise yield value is found from "radiochemical/fallout analysis"; that is, measuring the quantity of fission products generated, in much the same way as the chemical yield in chemical reaction products can be measured after a chemical reaction. The radiochemical analysis method was pioneered by Herbert L. Anderson.

For nuclear explosive devices where the fallout is not attainable or would be misleading, neutron activation analysis is often employed as the second most accurate method; it was used to determine the yields of both Little Boy and the thermonuclear Ivy Mike device.

Yields can also be inferred in a number of other remote sensing ways, including scaling law calculations based on blast size, infrasound, fireball brightness (Bhangmeter), seismographic data (CTBTO), and the strength of the shock wave.

Enrico Fermi famously made a (very) rough calculation of the yield of the Trinity test by dropping small pieces of paper in the air and measuring how far they were moved by the blast wave of the explosion. Using the deviation of the papers' fall away from the vertical as a crude blast gauge/barograph, he found the blast pressure, in pounds per square inch, at his distance from the detonation, and from that pressure at that distance he extrapolated backwards to estimate the yield of the Trinity device, which he found was about 10 kilotonnes of blast energy.

Fermi later recalled:

I was stationed at the Base Camp at Trinity in a position about ten miles [16 km] from the site of the explosion... About 40 seconds after the explosion the air blast reached me. I tried to estimate its strength by dropping from about six feet small pieces of paper before, during, and after the passage of the blast wave. Since, at the time, there was no wind[,] I could observe very distinctly and actually measure the displacement of the pieces of paper that were in the process of falling while the blast was passing. The shift was about 2 1/2 meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand tonnes of TNT.

The surface area (A) and volume (V) of a sphere of radius r are A = 4πr^2 and V = (4/3)πr^3, respectively.

The blast wave, however, was likely assumed to grow out as the surface area of the approximately hemispherical near-surface-burst blast wave of the Trinity gadget. The paper is moved 2.5 meters by the wave, so the effect of the Trinity device is to displace a hemispherical shell of air of volume 2.5 m × 2π × (16 km)^2. Multiplying by 1 atm gives an energy of 4.1×10^14 J ~ 100 kt TNT.
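
The arithmetic above can be reproduced directly; a minimal sketch:

```python
import math

# Reproduce the hemispherical-shell estimate described above.
displacement = 2.5       # m, Fermi's measured paper displacement
distance = 16e3          # m, Fermi's distance from the detonation (~10 miles)
atm = 101325.0           # Pa, 1 standard atmosphere
J_PER_KT = 4.184e12      # joules per kilotonne of TNT

shell_volume = displacement * 2 * math.pi * distance**2  # m^3, hemisphere area x displacement
energy = shell_volume * atm                              # J, work done against ~1 atm
print(energy, energy / J_PER_KT)   # ~4.1e14 J, i.e. on the order of 100 kt
```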

This photograph of the Trinity blast, captured by Berlyn Brixner, was used by G. I. Taylor to estimate its yield.

A good approximation of the yield of the Trinity test device was obtained in 1950 by the British physicist G. I. Taylor from simple dimensional analysis and an estimation of the heat capacity of very hot air. Taylor had initially done this highly classified work in mid-1941 and published an analysis of the Trinity fireball data when the photographic data was declassified in 1950 (after the USSR had exploded its own version of the bomb).

Taylor noted that the radius R of the blast should initially depend only on the energy E of the explosion, the time t after the detonation, and the density ρ of the air. The only equation having compatible dimensions that can be constructed from these quantities is

R = S (E t^2 / ρ)^(1/5)

Here S is a dimensionless constant having a value approximately equal to 1, since it is a low-order function of the heat capacity ratio or adiabatic index

γ = c_p / c_v

which is approximately 1 for all conditions.

Using the picture of the Trinity test shown here (which had been publicly released by the U.S. government and published in Life magazine), and using successive frames of the explosion, Taylor found that R^5/t^2 is a constant in a given nuclear blast (especially between 0.38 ms, after the shock wave has formed, and 1.93 ms, before significant energy is lost by thermal radiation). Furthermore, he estimated a value for S numerically at 1.

Thus, with t = 0.025 s and the blast radius being 140 metres, and taking ρ to be 1 kg/m^3 (the measured value at Trinity on the day of the test, as opposed to the sea-level value of approximately 1.3 kg/m^3) and solving for E, Taylor obtained that the yield was about 22 kilotonnes of TNT (90 TJ). This does not take into account the fact that the energy should only be about half this value for a hemispherical blast, but this very simple argument did agree to within 10% with the official value of the bomb's yield in 1950, which was 20 kilotonnes of TNT (84 TJ) (see G. I. Taylor, Proc. Roy. Soc. London A 200, pp. 235–247 (1950)).
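
Taylor's estimate is easy to reproduce from the figures just quoted; a minimal sketch (taking S = 1, as Taylor did):

```python
# Taylor's blast-wave relation R = S * (E * t**2 / rho)**(1/5),
# solved for the energy: E = rho * R**5 / (S**5 * t**2).
R = 140.0      # m, blast radius at time t (from the photograph)
t = 0.025      # s, time after detonation
rho = 1.0      # kg/m^3, air density at Trinity on the day of the test
S = 1.0        # dimensionless constant, taken as ~1

E = rho * R**5 / (S**5 * t**2)
print(E, E / 4.184e12)   # ~8.6e13 J, about 21 kt: close to Taylor's ~22 kt
                         # (small differences come from rounding of R and t)
```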

A good approximation to Taylor's constant S for γ below about 2 is

S ≈ (75 (γ − 1) / (8π))^(1/5)

The value of the heat capacity ratio here is between the 1.67 of fully dissociated air molecules and the lower value for very hot diatomic air (1.2), and under the conditions of an atomic fireball is (coincidentally) close to the STP (standard) gamma for room-temperature air, which is 1.4. This gives the value of Taylor's S constant as 1.036 for the adiabatic hypershock region where the constant R^5/t^2 condition holds.

As it relates to fundamental dimensional analysis, one expresses all the variables in terms of mass M, length L, and time T:

[E] = M L^2 T^-2 (think of the expression for kinetic energy, E = (1/2)mv^2),
[ρ] = M L^-3,
[R] = L,
[t] = T,

and then derives an expression for, say, E, in terms of the other variables, by finding values of α, β, and γ in the general relation

E = ρ^α R^β t^γ

such that the left and right sides are dimensionally balanced in terms of M, L, and T (i.e., each dimension has the same exponent on both sides). Balancing gives α = 1, β = 5, γ = −2, i.e. E ∝ ρ R^5 / t^2.
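
The exponent-matching step can be automated; a small sketch with sympy (variable names are ours):

```python
import sympy as sp

alpha, beta, gamma_ = sp.symbols('alpha beta gamma')

# Dimensions as (M, L, T) exponents:
#   [E]   = M^1 L^2  T^-2
#   [rho] = M^1 L^-3 T^0
#   [R]   = M^0 L^1  T^0
#   [t]   = M^0 L^0  T^1
# Require E = rho**alpha * R**beta * t**gamma to balance dimensionally.
eqs = [
    sp.Eq(alpha, 1),              # mass:   1 = alpha
    sp.Eq(-3*alpha + beta, 2),    # length: 2 = -3*alpha + beta
    sp.Eq(gamma_, -2),            # time:  -2 = gamma
]
print(sp.solve(eqs, (alpha, beta, gamma_)))
# {alpha: 1, beta: 5, gamma: -2}  =>  E ~ rho * R**5 / t**2
```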

Other methods and controversy

Where these data are not available, as in a number of cases, precise yields have been in dispute, especially when they are tied to questions of politics. The weapons used in the atomic bombings of Hiroshima and Nagasaki, for example, were highly individual and very idiosyncratic designs, and gauging their yield retrospectively has been quite difficult. The Hiroshima bomb, "Little Boy", is estimated to have been between 12 and 18 kilotonnes of TNT (50 and 75 TJ) (a 20% margin of error), while the Nagasaki bomb, "Fat Man", is estimated to be between 18 and 23 kilotonnes of TNT (75 and 96 TJ) (a 10% margin of error).

Such apparently small changes in values can be important when trying to use the data from these bombings as reflective of how other bombs would behave in combat, and also result in differing assessments of how many "Hiroshima bombs" other weapons are equivalent to (for example, the Ivy Mike hydrogen bomb was equivalent to either 867 or 578 Hiroshima weapons — a rhetorically quite substantial difference — depending on whether one uses the high or low figure for the calculation).

Other disputed yields have included that of the massive Tsar Bomba, whose yield was claimed to be "only" 50 megatonnes of TNT (210 PJ) or at a maximum of 57 megatonnes of TNT (240 PJ) by differing political figures, either as a way of hyping the power of the bomb or as an attempt to undercut it.

Mass media regulation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mass_media_regulation

Mass media regulations are rules enforced by law. Guidelines for media use differ across the world. This regulation, via law, rules, or procedures, can have various goals: for example, intervention to protect a stated "public interest", encouraging competition and an effective media market, or establishing common technical standards.

The principal targets of mass media regulation are the press, radio and television, but may also include film, recorded music, cable, satellite, storage and distribution technology (discs, tapes etc.), the internet, mobile phones etc.

Principal foundations

  • Balance between positively and negatively defined liberties.
Negatively defined liberties, legislating the role of media institutions in society and securing their freedom of expression, publication, private ownership, commerce, and enterprise, must be balanced by legislation ensuring citizens' positive freedom of access to information.
  • Balance between state and market.
Media occupy a position between commerce and democracy.

These require a balance between rights and obligations. To maintain this contractual balance, society expects the media to exercise their privilege responsibly. Moreover, market forces alone have failed to guarantee a wide range of public opinion and free expression. To meet these expectations and provide this assurance, regulation of the media was formalized.

Public service

Commercial mass media controlled by economic market forces do not always deliver a product that satisfies all needs. Children's and minority interests are not always served well. Political news is often trivialized and reduced to tabloid journalism, slogans, sound bites, spin, horse-race reporting, celebrity scandals, populism, and infotainment.

This is regarded as a problem for the democratic process when the commercial news media fail to provide balanced and thorough coverage of political issues and debates.

Many countries in Europe and Japan have implemented publicly funded media with public service obligations in order to meet the needs that are not satisfied by free commercial media. However, the public service media are under increasing pressure due to competition from commercial media, as well as political pressure.

Other countries, including the US, have weak or non-existing public service media.

By country

Egypt

Egypt's regulation laws encompass media and journalism publishing. Any press release to the public that goes against the Egyptian Constitution can be subject to punishment under these laws. The law was put in place to regulate the circulation of misinformation online, and legal action can be taken against those who share false facts. Egypt's Supreme Council for Media Regulations (SCMR) is authorised to place people with more than 5,000 followers on social media, or with a personal blog or website, under supervision. More than 500 websites had already been blocked in Egypt prior to the new law in 2018. Websites must go through Egypt's "Supreme Council for the Administration of the Media" to acquire a license to publish.

Media regulation in Egypt has always been restrictive, and in recent years it has become even more so. In 2018, a law was put in place to prevent the press and any media outlet from publishing content that violates the Egyptian Constitution or contains any "violence, racism, hatred, or extremism." If any content raises national security concerns or is broadcast as "false news", the Egyptian Government will ban the media outlets that produced it. The law, known as the SCMR Law, creates a regulatory scheme under which government authorities can block content; those who want to produce content or publish a website must obtain a license from Egypt's "Supreme Council for the Administration of the Media."

China

In the early period of modern Chinese history, the relationship between government and society was extremely unbalanced. The government held power over the Chinese people and controlled the media, making the media highly political.

Economic reform decreased the governing function of the media and created a tendency for mass media to represent society rather than only the authorities. The previously unbalanced structure between a powerful government and a weak society was loosened to some degree by this policy, but not truly changed until the emergence of the Internet. At first the regulator did not regard the Internet as a category of mass media but as a business technology. Underestimating the power of the Internet as a communications tool resulted in a lack of Internet regulation. Since then, the Internet has changed communication methods and media structure, and has overturned the pattern of public voice expression in China.

Regulators have not let the Internet out of their control, and will not. In recent years, the strategy in approaching the Internet has been to regulate while developing.

Internet regulation in China is generally formed by:

  • Legislation
China has the greatest amount of Internet-related legislation in the world. According to statistics, by October 2008, 14 different departments, such as the NPC of China, the Publicity Department of the Chinese Communist Party, and the State Council Information Office, had published more than 60 laws related to Internet regulation.
  • Administration
Internet regulation departments in China each have their own distribution of work. The Ministry of Industry and Information Technology is responsible for the development and regulation of the industry, the Ministry of Public Security regulates security and fights crime, and the Propaganda Department leads the system in which departments of culture, broadcasting, journalism, education, etc. regulate information content.
  • Technical control
The Internet regulation departments restrain unlawful expression and behavior through techniques such as blocking information deemed harmful to social stability and implementing a real-name system on the Internet.
  • Agenda control
Communicators are required to establish the relationship between expected information targets and actual targets, guiding the direction of information to meet expectations.
  • Structure adjustment
Traditional media affiliated with the government strive to develop the Internet under a relatively flexible administrative system, increasing the communicative power of authoritative mainstream media to compete with social communication.
  • Training
Regulators convey their expectations of the Internet environment to the population through training and education, heightening people's consciousness of behavioral norms.

The European Union

Most EU member states have replaced media ownership regulations with competition laws. These laws are created by governing bodies to protect consumers from predatory business practices by ensuring that fair competition exists in an open-market economy. However, these laws cannot solve the problem of convergence and concentration of media.

Norway

The media systems in Scandinavian countries are twin-duopolistic, with powerful public service broadcasting and periodic strong government intervention. Hallin and Mancini characterized the Norwegian media system as Democratic Corporatist. Newspapers started early and developed very well without state regulation until the 1960s. The rise of the advertising industry helped the most powerful newspapers grow steadily, while small publications struggled at the bottom of the market. Because this lack of diversity in the newspaper industry affected true freedom of speech, the Norwegian government took action: in 1969 it started to provide press subsidies to small local newspapers. But this method was not able to solve the problem completely. In 1997, compelled by concern over media ownership concentration, Norwegian legislators passed the Media Ownership Act, entrusting the Norwegian Media Authority with the power to intervene in media cases where press freedom and media plurality were threatened. The Act was amended in 2005 and 2006 and revised in 2013.

The basic foundation of Norwegian regulation of the media sector is to ensure freedom of speech, structural pluralism, national language and culture, and the protection of children from harmful media content. Related regulatory instruments include the Media Ownership Law, the Broadcasting Act, and the Editorial Independence Act. NOU 1988:36 stated that a fundamental premise of all Norwegian media regulation is that news media serve as an oppositional force to power. The condition for news media to achieve this role is a peaceful environment of diversity of editorial ownership and free speech. White Paper No. 57 claimed that real content diversity can only be attained by pluralistically owned and editorially independent media whose production is founded on the principles of journalistic professionalism. To ensure this diversity, the Norwegian government regulates the framework conditions of the media and primarily focuses regulation on pluralistic ownership.

United Kingdom

Following the Leveson Inquiry, the Press Recognition Panel (PRP) was set up under the Royal Charter on self-regulation of the press to judge whether press regulators meet the criteria recommended by the Inquiry for recognition under the Charter. By 2016 the UK had two new press regulatory bodies: the Independent Press Standards Organisation (IPSO), which regulates most national newspapers and many other media outlets, and IMPRESS, which regulates a much smaller number of outlets but is the only press regulator recognised by the PRP (since October 2016). Ofcom also oversees the use of social media and devices in the United Kingdom. The BBC reports that Ofcom analyzes the media use of the young (ages 3 to 15) to gather information on how the United Kingdom's youth use media.

Broadcast media (TV, radio, video on demand), telecommunications, and postal services are regulated by Ofcom.

United States

The First Amendment to the United States Constitution forbids the government from abridging freedom of speech or freedom of the press. However, there are certain exceptions to free speech. For example, there are regulations on public broadcasters: the Federal Communications Commission forbids the broadcast of "indecent" material on the public airwaves. The accidental exposure of Janet Jackson's nipple during the halftime show at Super Bowl XXXVIII led to the passage of the Broadcast Decency Enforcement Act of 2005, which increased the maximum fine that the FCC could levy for indecent broadcasts from $32,500 to $325,000, with a maximum liability of $3 million. The intent is to shield younger viewers from expressions and ideas that are deemed offensive. The Supreme Court of the United States has yet to rule substantively on regulation of the internet, but that could change if net neutrality comes into play.

Brazil

Brazil's constitution, written in 1988, guarantees freedom of expression without censorship. It also protects the privacy of communications except by court order. Journalists in Brazil are protected under the constitution and are able to report freely. Many media outlets in Brazil are owned or invested in by politicians, which has an influence on editorial decisions. Much of Brazil's media regulation changes with changes in government; the current government has expanded the law very little beyond the freedom of speech guaranteed in the constitution. One of the measures the current government has put out is a new decree that aims to curb the arbitrary removal of social media accounts.

Fiji

In June 2010, the Fijian Government passed the Media Industry Development Decree of 2010, establishing the Media Industry Development Authority of Fiji, which enforces media ethics governing all media organizations in Fiji. The Authority has implemented penalties, including fines and imprisonment, for ethical breaches. The aim of the decree is to promote balanced, fair, and accurate reporting in Fiji.

Criticism

Anthony Lowstedt and Sulaiman Al-Wahid suggested that authorities need to issue diverse media laws, centered on anti-monopoly and anti-oligopoly measures and carrying democratic legitimacy, since media outlets are important for national security and social stability. Global regulation of new media technologies aims to ensure cultural diversity in media content and to provide a free space for public access and for various opinions and ideas without censorship. Such regulation also protects the independence of media ownership from the dominance of powerful financial corporations, and preserves the media from commercial and political hegemony.

In China, the possibility that a film approved by the Central Board of Film Censors can still be banned because of the disagreement of a specific leading cadre has never been eliminated. The Chinese screenwriter Wang Xingdong stated that regulation of literature and art should be based on laws and not on the preferences of some individuals. In the field of media, relevant legislation must be introduced as soon as possible and applied strictly, to prevent leaders from overriding the law with their power to control media content.

Time geography

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Time_geography

Time geography or time-space geography is an evolving transdisciplinary perspective on spatial and temporal processes and events such as social interaction, ecological interaction, social and environmental change, and biographies of individuals. Time geography "is not a subject area per se", but rather an integrative ontological framework and visual language in which space and time are basic dimensions of analysis of dynamic processes. Time geography was originally developed by human geographers, but today it is applied in multiple fields related to transportation, regional planning, geography, anthropology, time-use research, ecology, environmental science, and public health. According to Swedish geographer Bo Lenntorp: "It is a basic approach, and every researcher can connect it to theoretical considerations in her or his own way."

Origins

The Swedish geographer Torsten Hägerstrand created time geography in the mid-1960s based on ideas he had developed during his earlier empirical research on human migration patterns in Sweden. He sought "some way of finding out the workings of large socio-environmental mechanisms" using "a physical approach involving the study of how events occur in a time-space framework". Hägerstrand was inspired in part by conceptual advances in spacetime physics and by the philosophy of physicalism.

Hägerstrand's earliest formulation of time geography informally described its key ontological features: "In time-space the individual describes a path" within a situational context; "life paths become captured within a net of constraints, some of which are imposed by physiological and physical necessities and some imposed by private and common decisions". "It would be impossible to offer a comprehensive taxonomy of constraints seen as time-space phenomena", Hägerstrand said, but he "tentatively described" three important classes of constraints:

  • capability constraints — limitations on the activity of individuals because of their biological structure and/or the tools they can command,
  • coupling constraints — limitations that "define where, when, and for how long, the individual has to join other individuals, tools, and materials in order to produce, consume, and transact" (closely related to critical path analysis), and
  • authority constraints — limitations on the domain or "time-space entity within which things and events are under the control of a given individual or a given group".
A space-time cube is a three-axis graph where one axis represents the time dimension and the other axes represent two spatial dimensions
Examples of the visual language of time geography: space-time cube, path, prism, bundle, and other concepts

Hägerstrand illustrated these concepts with novel forms of graphical notation (inspired in part by musical notation), such as:

  • the space-time aquarium (or space-time cube), which displays individual paths in axonometric graphical projection of space and time coordinates;
  • the space-time prism, which shows individuals' possible behavior in time-space given their capability constraints and coupling constraints;
  • bundles of paths, which are the conjunction of individual paths due in part to their capability constraints and coupling constraints, and which help to create "pockets of local order";
  • concentric tubes or rings of accessibility, which indicate certain capability constraints of a given individual, such as limited spatial size and limited manual, oral-auditive and visual range; and
  • nested hierarchies of domains, which show the authority constraints for a given individual or a given group.

While this innovative visual language is an essential feature of time geography, Hägerstrand's colleague Bo Lenntorp emphasized that it is the product of an underlying ontology, and "not the other way around. The notation system is a very useful tool, but it is a rather poor reflection of a rich world-view. In many cases, the notational apparatus has been the hallmark of time geography. However, the underlying ontology is the most important feature." Time geography is not only about time-geographic diagrams, just as music is not only about musical notation. Hägerstrand later explained: "What is briefly alluded to here is a 4-dimensional world of forms. This cannot be completely graphically depicted. On the other hand one ought to be able to imagine it with sufficient clarity for it to be of guidance in empirical and theoretical research."

By 1981, geographers Nigel Thrift and Allan Pred were already defending time geography against those who would see it "merely as a rigid descriptive model of spatial and temporal organization which lends itself to accessibility constraint analysis (and related exercises in social engineering)." They argued that time geography is not just a model of constraints; it is a flexible and evolving way of thinking about reality that can complement a wide variety of theories and research methods. In the decades since then, Hägerstrand and others have made efforts to expand his original set of concepts. By the end of his life, Hägerstrand had ceased using the phrase "time geography" to refer to this way of thinking and instead used words like topoecology.

Later developments

Schematic and example of a space-time prism using transit network data: On the right is a schematic diagram of a space-time prism, and on the left is a map of the potential path area for two different time budgets.

Since the 1980s, time geography has been used by researchers in the social sciences, the biological sciences, and in interdisciplinary fields.

In 1993, British geographer Gillian Rose noted that "time-geography shares the feminist interest in the quotidian paths traced by people, and again like feminism, links such paths, by thinking about constraints, to the larger structures of society." However, she noted that time geography had not been applied to issues important to feminists, and she called it a form of "social science masculinity". Over the following decades, feminist geographers have revisited time geography and have begun to use it as a tool to address feminist issues.

GIS software has been developed to compute and analyze time-geographic problems at a variety of spatial scales. Such analyses have used different types of network datasets (such as walking networks, highway networks, and public transit schedules) as well as a variety of visualization strategies. Specialized software such as GeoTime has been developed to facilitate time-geographic visualization and visual analytics.
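
As a toy illustration of such computations, here is a purely Euclidean Python sketch of a space-time prism's potential path area between two anchor points. The coordinates, speed, and time budget are hypothetical, and real analyses would use network travel times as described above:

```python
import numpy as np

def in_prism(x, y, origin, destination, t_budget, speed):
    """Is point (x, y) inside the space-time prism's potential path area?

    A location is reachable if travel origin -> (x, y) -> destination
    fits within the time budget at the given constant speed.
    (Illustrative Euclidean version; GIS tools use real travel networks.)
    """
    d1 = np.hypot(x - origin[0], y - origin[1])
    d2 = np.hypot(x - destination[0], y - destination[1])
    return (d1 + d2) / speed <= t_budget

# Hypothetical anchors: home at (0, 0) and work at (6, 0), in km;
# a 2-hour budget at 5 km/h walking speed.
print(in_prism(3, 1, (0, 0), (6, 0), t_budget=2.0, speed=5.0))  # True: detour fits
print(in_prism(3, 5, (0, 0), (6, 0), t_budget=2.0, speed=5.0))  # False: too far off-path
```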

Time geography has also been used as a form of therapeutic assessment in mental health.

Benjamin Bach and colleagues have generalized the space-time cube into a framework for temporal data visualization that applies to all data that can be represented in two dimensions plus time.

In the COVID-19 pandemic, time geography approaches were applied to identify close contacts. The pandemic imposed restrictions on the physical mobility of humans, which invited new applications of time geography in the increasingly virtualized post-Covid era.

World line

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/World_line

The world line (or worldline) of an object is the path that an object traces in 4-dimensional spacetime. It is an important concept of modern physics, and particularly theoretical physics.

The concept of a "world line" is distinguished from concepts such as an "orbit" or a "trajectory" (e.g., a planet's orbit in space or the trajectory of a car on a road) by inclusion of the dimension time, and typically encompasses a large area of spacetime wherein paths which are straight perceptually are rendered as curves in space-time to show their (relatively) more absolute position states—to reveal the nature of special relativity or gravitational interactions.

The idea of world lines was originated by physicists and was pioneered by Hermann Minkowski. The term is now used most often in the context of relativity theories (i.e., special relativity and general relativity).

Usage in physics

A world line of an object (generally approximated as a point in space, e.g., a particle or observer) is the sequence of spacetime events corresponding to the history of the object. A world line is a special type of curve in spacetime. Below an equivalent definition will be explained: A world line is a time-like curve in spacetime. Each point of a world line is an event that can be labeled with the time and the spatial position of the object at that time.

For example, the orbit of the Earth in space is approximately a circle, a three-dimensional (closed) curve in space: the Earth returns every year to the same point in space relative to the sun. However, it arrives there at a different (later) time. The world line of the Earth is therefore helical in spacetime (a curve in a four-dimensional space) and does not return to the same point.

Spacetime is the collection of events, together with a continuous and smooth coordinate system identifying the events. Each event can be labeled by four numbers: a time coordinate and three space coordinates; thus spacetime is a four-dimensional space. The mathematical term for spacetime is a four-dimensional manifold (a topological space that locally resembles Euclidean space near each point). The concept may be applied as well to a higher-dimensional space. For easy visualizations of four dimensions, two space coordinates are often suppressed. An event is then represented by a point in a Minkowski diagram, which is a plane usually plotted with the time coordinate, say t, vertically, and the space coordinate, say x, horizontally. As expressed by F. R. Harvey:

A curve M in [spacetime] is called a worldline of a particle if its tangent is future timelike at each point. The arclength parameter is called proper time and usually denoted τ. The length of M is called the proper time of the particle. If the worldline M is a line segment, then the particle is said to be in free fall.

A world line traces out the path of a single point in spacetime. A world sheet is the analogous two-dimensional surface traced out by a one-dimensional line (like a string) traveling through spacetime. The world sheet of an open string (with loose ends) is a strip; that of a closed string (a loop) resembles a tube.

Once the object is not approximated as a mere point but has extended volume, it traces not a world line but rather a world tube.

World lines as a method of describing events

World line, worldsheet, and world volume, as they are derived from particles, strings, and branes.

A one-dimensional line or curve can be represented by the coordinates as a function of one parameter. Each value of the parameter corresponds to a point in spacetime, and varying the parameter traces out a line. So in mathematical terms a curve is defined by four coordinate functions x^a(τ), a = 0, 1, 2, 3 (where x^0 usually denotes the time coordinate) depending on one parameter τ. A coordinate grid in spacetime is the set of curves one obtains if three out of the four coordinate functions are set to a constant.

Sometimes, the term world line is used informally for any curve in spacetime. This terminology causes confusion. More properly, a world line is a curve in spacetime that traces out the (time) history of a particle, observer, or small object. One usually uses the proper time of an object or an observer as the curve parameter along the world line.

Trivial examples of spacetime curves

Three different world lines representing travel at different constant four-velocities. t is time and x distance.

A curve that consists of a horizontal line segment (a line at constant coordinate time), may represent a rod in spacetime and would not be a world line in the proper sense. The parameter simply traces the length of the rod.

A line at constant space coordinate (a vertical line using the convention adopted above) may represent a particle at rest (or a stationary observer). A tilted line represents a particle with a constant coordinate speed (constant change in space coordinate with increasing time coordinate). The more the line is tilted from the vertical, the larger the speed.

Two world lines that start out separately and then intersect, signify a collision or "encounter". Two world lines starting at the same event in spacetime, each following its own path afterwards, may represent e.g. the decay of a particle into two others or the emission of one particle by another.

World lines of a particle and an observer may be interconnected with the world line of a photon (the path of light) and form a diagram depicting the emission of a photon by a particle that is subsequently observed by the observer (or absorbed by another particle).

Tangent vector to a world line: four-velocity

The four coordinate functions x^a(τ), a = 0, 1, 2, 3 defining a world line are real functions of a real variable τ and can simply be differentiated by the usual calculus. Without the existence of a metric (this is important to realize) one can imagine the difference between a point p on the curve at the parameter value τ and a point on the curve a little (parameter Δτ) farther away. In the limit Δτ → 0, this difference divided by Δτ defines a vector, the tangent vector of the world line at the point p. It is a four-dimensional vector, defined at the point p. It is associated with the normal 3-dimensional velocity of the object (but it is not the same) and therefore termed four-velocity v, or in components:

v = (v^0, v^1, v^2, v^3) = (dx^0/dτ, dx^1/dτ, dx^2/dτ, dx^3/dτ)

where the derivatives are taken at the point p, so at τ = τ(p).

All curves through point p have a tangent vector, not only world lines. The sum of two vectors is again a tangent vector to some other curve and the same holds for multiplying by a scalar. Therefore, all tangent vectors for a point p span a linear space, termed the tangent space at point p. For example, taking a 2-dimensional space, like the (curved) surface of the Earth, its tangent space at a specific point would be the flat approximation of the curved space.
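
A small numerical sketch of the tangent-vector definition above (our own construction, in units where c = 1), using the standard uniformly accelerating world line:

```python
import numpy as np

# World line of a uniformly accelerating particle, in units where c = 1:
#   t(tau) = (1/a) sinh(a*tau),  x(tau) = (1/a) cosh(a*tau)
a = 1.0
tau = 0.7
dtau = 1e-6

def X(tau):
    """Spacetime position (t, x, y, z) at proper time tau."""
    return np.array([np.sinh(a*tau)/a, np.cosh(a*tau)/a, 0.0, 0.0])

# Tangent vector: difference quotient along the curve, as in the definition above.
v = (X(tau + dtau) - X(tau)) / dtau

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski form, signature (-+++)
print(v)             # four-velocity components (dt/dtau, dx/dtau, ...)
print(v @ eta @ v)   # ~ -1: proper-time parametrization gives eta(v, v) = -c^2
```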

World lines in special relativity

So far a world line (and the concept of tangent vectors) has been described without a means of quantifying the interval between events. The basic mathematics is as follows: The theory of special relativity puts some constraints on possible world lines. In special relativity the description of spacetime is limited to special coordinate systems that do not accelerate (and so do not rotate either), termed inertial coordinate systems. In such coordinate systems, the speed of light is a constant. The structure of spacetime is determined by a bilinear form η, which gives a real number for each pair of events. The bilinear form is sometimes termed a spacetime metric, but since distinct events sometimes result in a zero value, unlike metrics in metric spaces of mathematics, the bilinear form is not a mathematical metric on spacetime.

World lines of freely falling particles/objects are called geodesics. In special relativity these are straight lines in Minkowski space.

Often the time units are chosen such that the speed of light is represented by lines at a fixed angle, usually at 45 degrees, forming a cone with the vertical (time) axis. In general, useful curves in spacetime can be of three types (the other types would be partly one, partly another type):

  • light-like curves, having at each point the speed of light. They form a cone in spacetime, dividing it into two parts. The cone is three-dimensional in spacetime, appears as a line in drawings with two dimensions suppressed, and as a cone in drawings with one spatial dimension suppressed.
An example of a light cone, the three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. Here, it is depicted with one spatial dimension suppressed.
The momentarily co-moving inertial frames along the trajectory ("world line") of a rapidly accelerating observer (center). The vertical direction indicates time, while the horizontal indicates distance, the dashed line is the spacetime of the observer. The small dots are specific events in spacetime. Note how the momentarily co-moving inertial frame changes when the observer accelerates.
  • time-like curves, with a speed less than the speed of light. These curves must fall within a cone defined by light-like curves. In our definition above: world lines are time-like curves in spacetime.
  • space-like curves falling outside the light cone. Such curves may describe, for example, the length of a physical object. The circumference of a cylinder and the length of a rod are space-like curves.

At a given event on a world line, spacetime (Minkowski space) is divided into three parts.

  • The future of the given event is formed by all events that can be reached through time-like curves lying within the future light cone.
  • The past of the given event is formed by all events that can influence the event (that is, that can be connected by world lines within the past light cone to the given event).
    • The lightcone at the given event is formed by all events that can be connected through light rays with the event. When we observe the sky at night, we basically see only the past light cone within the entire spacetime.
  • Elsewhere is the region between the two light cones. Points in an observer's elsewhere are inaccessible to them; only points in the past can send signals to the observer. In ordinary laboratory experience, using common units and methods of measurement, it may seem that we look at the present, but in fact there is always a delay time for light to propagate. For example, we see the Sun as it was about 8 minutes ago, not as it is "right now". Unlike the present in Galilean/Newtonian theory, the elsewhere is thick; it is not a 3-dimensional volume but is instead a 4-dimensional spacetime region.
    • Included in "elsewhere" is the simultaneous hyperplane, which is defined for a given observer by a space that is hyperbolic-orthogonal to their world line. It is really three-dimensional, though it would be a 2-plane in the diagram because we had to throw away one dimension to make an intelligible picture. Although the light cones are the same for all observers at a given spacetime event, different observers, with differing velocities but coincident at the event (point) in the spacetime, have world lines that cross each other at an angle determined by their relative velocities, and thus they have different simultaneous hyperplanes.
    • The present often means the single spacetime event being considered.

Simultaneous hyperplane

Since a world line determines a velocity 4-vector v that is time-like, the Minkowski form η determines a linear function x ↦ η(x, v). Let N be the null space of this linear functional. Then N is called the simultaneous hyperplane with respect to v. The relativity of simultaneity is the statement that N depends on v. Indeed, N is the orthogonal complement of v with respect to η. Two world lines u and w with the same four-velocity v share the same simultaneous hyperplane. This hyperplane exists mathematically, but physical relations in relativity involve the movement of information by light. For instance, the traditional electrostatic force described by Coulomb's law may be pictured in a simultaneous hyperplane, but relativistic relations of charge and force involve retarded potentials.
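
The null space N can be computed numerically; a small sketch (our own construction, with c = 1 and signature (−+++)):

```python
import numpy as np
from scipy.linalg import null_space

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski form, signature (-+++)

# Four-velocity of an observer moving at speed beta along x (c = 1):
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
v = np.array([gamma, gamma * beta, 0.0, 0.0])

# N = null space of the linear functional x -> eta(x, v).
functional = (eta @ v).reshape(1, 4)
N = null_space(functional)
print(N)               # three basis vectors spanning the simultaneous hyperplane
print(functional @ N)  # ~0: every basis vector is eta-orthogonal to v
```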

World lines in general relativity

The use of world lines in general relativity is basically the same as in special relativity, with the difference that spacetime can be curved. A metric exists, and its dynamics are determined by the Einstein field equations and depend on the mass-energy distribution in spacetime. Again the metric defines lightlike (null), spacelike, and timelike curves. Also, in general relativity, world lines are timelike curves in spacetime, where timelike curves fall within the lightcone. However, a lightcone is not necessarily inclined at 45 degrees to the time axis; this is an artifact of the chosen coordinate system, reflecting the coordinate freedom (diffeomorphism invariance) of general relativity. Any timelike curve admits a comoving observer whose "time axis" corresponds to that curve, and, since no observer is privileged, we can always find a local coordinate system in which lightcones are inclined at 45 degrees to the time axis. See also, for example, Eddington–Finkelstein coordinates.

World lines of free-falling particles or objects (such as planets around the Sun or an astronaut in space) are called geodesics.

World lines in quantum field theory

Quantum field theory, the framework in which all of modern particle physics is described, is usually described as a theory of quantized fields. However, although not widely appreciated, it has been known since Feynman that many quantum field theories may equivalently be described in terms of world lines. The world line formulation of quantum field theory has proved particularly fruitful for various calculations in gauge theories and in describing nonlinear effects of electromagnetic fields.

World lines in literature

In 1884 C. H. Hinton wrote an essay, "What is the fourth dimension?", which he published as a scientific romance. He wrote:

Why, then, should not the four-dimensional beings be ourselves, and our successive states the passing of them through the three-dimensional space to which our consciousness is confined.[8]: 18–19 

A popular description of human world lines was given by J. C. Fields at the University of Toronto in the early days of relativity. As described by Toronto lawyer Norman Robertson:

I remember [Fields] lecturing at one of the Saturday evening lectures at the Royal Canadian Institute. It was advertised to be a "Mathematical Fantasy"—and it was! The substance of the exercise was as follows: He postulated that, commencing with his birth, every human being had some kind of spiritual aura with a long filament or thread attached, that traveled behind him throughout his life. He then proceeded in imagination to describe the complicated entanglement every individual became involved in his relationship to other individuals, comparing the simple entanglements of youth to those complicated knots that develop in later life.

Kurt Vonnegut, in his novel Slaughterhouse-Five, describes the worldlines of stars and people:

“Billy Pilgrim says that the Universe does not look like a lot of bright little dots to the creatures from Tralfamadore. The creatures can see where each star has been and where it is going, so that the heavens are filled with rarefied, luminous spaghetti. And Tralfamadorians don't see human beings as two-legged creatures, either. They see them as great millepedes - "with babies' legs at one end and old people's legs at the other," says Billy Pilgrim.”

Almost all science-fiction stories which use this concept actively, such as to enable time travel, oversimplify it to a one-dimensional timeline to fit a linear structure, which does not fit models of reality. Such time machines are often portrayed as being instantaneous, with their contents departing at one time and arriving at another, but at the same literal geographic point in space. This is often carried out without note of a reference frame, or with the implicit assumption that the reference frame is local; as such, it would require either accurate teleportation (since a rotating planet, being under acceleration, is not an inertial frame) or for the time machine to remain in the same place, its contents 'frozen'.

Author Oliver Franklin published a science fiction work in 2008 entitled World Lines in which he related a simplified explanation of the hypothesis for laymen.

In the short story Life-Line, author Robert A. Heinlein describes the world line of a person:

He stepped up to one of the reporters. "Suppose we take you as an example. Your name is Rogers, is it not? Very well, Rogers, you are a space-time event having duration four ways. You are not quite six feet tall, you are about twenty inches wide and perhaps ten inches thick. In time, there stretches behind you more of this space-time event, reaching to perhaps nineteen-sixteen, of which we see a cross-section here at right angles to the time axis, and as thick as the present. At the far end is a baby, smelling of sour milk and drooling its breakfast on its bib. At the other end lies, perhaps, an old man someplace in the nineteen-eighties.
"Imagine this space-time event that we call Rogers as a long pink worm, continuous through the years, one end in his mother's womb, and the other at the grave..."

Heinlein's Methuselah's Children uses the term, as does James Blish's The Quincunx of Time (expanded from "Beep").

A visual novel named Steins;Gate, produced by 5pb., tells a story based on the shifting of world lines. Steins;Gate is a part of the "Science Adventure" series. World lines and other physical concepts like the Dirac Sea are also used throughout the series.

Neal Stephenson's novel Anathem involves a long discussion of worldlines over dinner in the midst of a philosophical debate between Platonic realism and nominalism.

Absolute Choice depicts different world lines as a sub-plot and setting device.

A space armada trying to complete a (nearly) closed time-like path as a strategic maneuver forms the backdrop and a main plot device of "Singularity Sky" by Charles Stross.

Introduction to entropy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Introduct...