Genome

From Wikipedia, the free encyclopedia

A labeled diagram explaining the different parts of a prokaryotic genome
 
An image of the 46 chromosomes making up the diploid genome of a human male. (The mitochondrial chromosome is not shown.)

In the fields of molecular biology and genetics, a genome is all genetic information of an organism. It consists of nucleotide sequences of DNA (or RNA in RNA viruses). The nuclear genome includes protein-coding genes and non-coding genes, the other functional regions of the genome (see Non-coding DNA), and any junk DNA if it is present. Algae and plants contain chloroplasts with a chloroplast genome and almost all eukaryotes have mitochondria and a mitochondrial genome.

The study of the genome is called genomics. The genomes of many organisms have been sequenced and various regions have been annotated. The International Human Genome Project reported the sequence of the genome for Homo sapiens in 2004, although the initial "finished" sequence was missing 8% of the genome consisting mostly of repetitive sequences.

With advances in technology that could handle the sequencing of the many repetitive sequences found in human DNA, which were not fully uncovered by the original Human Genome Project study, scientists reported the first end-to-end human genome sequence in March 2022.

Origin of term

The term genome was created in 1920 by Hans Winkler, professor of botany at the University of Hamburg, Germany. The Oxford English Dictionary suggests the name is a blend of the words gene and chromosome (but see omics for a more thorough discussion). A few related -ome words already existed, such as biome and rhizome, forming a vocabulary into which genome fits systematically.

Defining the genome

A precise definition of "genome" is difficult to formulate. It usually refers to the DNA (or sometimes RNA) molecules that carry the genetic information of an organism, but it is sometimes difficult to decide which molecules to include in the definition. For example, bacteria usually have one or two large DNA molecules (chromosomes) that contain all of the essential genetic material, but they also contain smaller extrachromosomal plasmid molecules that carry important genetic information. In the scientific literature, the definition of "genome" for bacteria is usually restricted to the large chromosomal DNA molecules.

Eukaryotic genomes are even more difficult to define because almost all eukaryotic species contain nuclear chromosomes plus extra DNA molecules in the mitochondria. In addition, algae and plants have chloroplast DNA. Most textbooks make a distinction between the nuclear genome and the organelle (mitochondria and chloroplast) genomes so when they speak of, say, the human genome, they are only referring to the genetic material in the nucleus. This is the most common use of 'genome' in the scientific literature.

Most eukaryotes are diploid, meaning that there are two copies of each chromosome in the nucleus, but the "genome" refers to only one copy of each chromosome. Some eukaryotes have distinctive sex chromosomes, such as the X and Y chromosomes of mammals, so the technical definition of the genome must include both copies of the sex chromosomes. The standard reference genome of humans, for example, consists of one copy of each of the 22 autosomes plus one X chromosome and one Y chromosome.

Sequencing and mapping

A genome sequence is the complete list of the nucleotides (A, C, G, and T for DNA genomes) that make up all the chromosomes of an individual or a species. Within a species, the vast majority of nucleotides are identical between individuals, but sequencing multiple individuals is necessary to understand the genetic diversity.
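
At the data level, a genome sequence is simply a very long string over the alphabet A, C, G, and T. As a minimal illustration, here is a short Python sketch; the sequence fragment is invented for the example:

    # A genome sequence is a string over {A, C, G, T}. Count each
    # nucleotide and compute the GC content of a made-up fragment.
    fragment = "ATGCGCGTAATTACGCGGCCAT"  # hypothetical, not real data

    counts = {base: fragment.count(base) for base in "ACGT"}
    gc_content = (counts["G"] + counts["C"]) / len(fragment)

    print(counts)                           # {'A': 5, 'C': 6, 'G': 6, 'T': 5}
    print(f"GC content: {gc_content:.1%}")  # GC content: 54.5%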

Part of a DNA sequence: the prototype of a complete viral genome

In 1976, Walter Fiers at the University of Ghent (Belgium) was the first to establish the complete nucleotide sequence of a viral RNA-genome (Bacteriophage MS2). The next year, Fred Sanger completed the first DNA-genome sequence: Phage Φ-X174, of 5386 base pairs. The first bacterial genome to be sequenced was that of Haemophilus influenzae, completed by a team at The Institute for Genomic Research in 1995. A few months later, the first eukaryotic genome was completed, with sequences of the 16 chromosomes of budding yeast Saccharomyces cerevisiae published as the result of a European-led effort begun in the mid-1980s. The first genome sequence for an archaeon, Methanococcus jannaschii, was completed in 1996, again by The Institute for Genomic Research.

The development of new technologies has made genome sequencing dramatically cheaper and easier, and the number of complete genome sequences is growing rapidly. The US National Institutes of Health maintains one of several comprehensive databases of genomic information. Among the thousands of completed genome sequencing projects are those for rice, the mouse, the plant Arabidopsis thaliana, the puffer fish, and the bacterium E. coli. In December 2013, scientists first sequenced the entire genome of a Neanderthal, an extinct species of human. The genome was extracted from the toe bone of a 130,000-year-old Neanderthal found in a Siberian cave.

New sequencing technologies, such as massively parallel sequencing, have also opened up the prospect of personal genome sequencing as a diagnostic tool, as pioneered by Manteia Predictive Medicine. A major step toward that goal was the completion in 2007 of the full genome of James D. Watson, one of the co-discoverers of the structure of DNA.

Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome. The Human Genome Project was organized to map and to sequence the human genome. A fundamental step in the project was the release of a detailed genomic map by Jean Weissenbach and his team at the Genoscope in Paris.

Reference genome sequences and maps continue to be updated, removing errors and clarifying regions of high allelic complexity. The decreasing cost of genomic mapping has permitted genealogical sites to offer it as a service, to the extent that one may submit one's genome to crowdsourced scientific endeavours such as DNA.LAND at the New York Genome Center, an example both of the economies of scale and of citizen science.

Viral genomes

Viral genomes can be composed of either RNA or DNA. The genomes of RNA viruses can be either single-stranded RNA or double-stranded RNA, and may contain one or more separate RNA molecules (segments: monopartite or multipartite genomes). DNA viruses can have either single-stranded or double-stranded genomes. Most DNA virus genomes are composed of a single, linear molecule of DNA, but some are made up of a circular DNA molecule.

Prokaryotic genomes

Prokaryotes and eukaryotes have DNA genomes. Archaea and most bacteria have a single circular chromosome, although some bacterial species have linear or multiple chromosomes. If the DNA is replicated faster than the bacterial cells divide, multiple copies of the chromosome can be present in a single cell. If the cells divide faster than the DNA can be replicated, a new round of chromosome replication is initiated before the previous one finishes, allowing daughter cells to inherit complete genomes along with already partially replicated chromosomes. Most prokaryotes have very little repetitive DNA in their genomes. However, some symbiotic bacteria (e.g. Serratia symbiotica) have reduced genomes and a high fraction of pseudogenes: only ~40% of their DNA encodes proteins.

Some bacteria have auxiliary genetic material, also part of their genome, which is carried in plasmids. For this reason, the word "genome" should not be used as a synonym of "chromosome".

Eukaryotic genomes

Eukaryotic genomes are composed of one or more linear DNA chromosomes. The number of chromosomes varies widely, from Jack jumper ants and an asexual nematode, which each have only one pair, to a fern species that has 720 pairs. Eukaryotic genomes contain a surprising amount of DNA compared to other genomes, far more than is needed for their protein-coding and noncoding genes: genome sizes vary as much as 64,000-fold across species. This variation is largely caused by the presence of repetitive DNA and transposable elements (TEs).

A typical human cell has two copies of each of 22 autosomes, one inherited from each parent, plus two sex chromosomes, making it diploid. Gametes, such as ova, sperm, spores, and pollen, are haploid, meaning they carry only one copy of each chromosome. In addition to the chromosomes in the nucleus, organelles such as the chloroplasts and mitochondria have their own DNA. Mitochondria are sometimes said to have their own genome often referred to as the "mitochondrial genome". The DNA found within the chloroplast may be referred to as the "plastome". Like the bacteria they originated from, mitochondria and chloroplasts have a circular chromosome.

Whereas the exon-intron organization of protein-coding genes is rather exceptional in prokaryotes, eukaryotic genes generally have this organization, and eukaryotic genomes contain variable amounts of repetitive DNA. In mammals and plants, the majority of the genome is composed of repetitive DNA. Genes in eukaryotic genomes can be annotated using FINDER.

Coding sequences

DNA sequences that carry the instructions to make proteins are referred to as coding sequences. The proportion of the genome occupied by coding sequences varies widely. A larger genome does not necessarily contain more genes, and the proportion of non-repetitive DNA decreases along with increasing genome size in complex eukaryotes.

Composition of the human genome

Noncoding sequences

Noncoding sequences include introns, sequences for non-coding RNAs, regulatory regions, and repetitive DNA. Noncoding sequences make up 98% of the human genome. There are two categories of repetitive DNA in the genome: tandem repeats and interspersed repeats.

Tandem repeats

Short, non-coding sequences that are repeated head-to-tail are called tandem repeats. Microsatellites consist of 2-5 basepair repeats, while minisatellite repeats are 30-35 bp long. Tandem repeats make up about 4% of the human genome and 9% of the fruit fly genome. Tandem repeats can be functional: for example, telomeres are composed of the tandem repeat TTAGGG in mammals, and they play an important role in protecting the ends of the chromosome.

In other cases, expansions in the number of tandem repeats in exons or introns can cause disease. For example, the human gene huntingtin (Htt) typically contains 6–29 tandem repeats of the nucleotides CAG (encoding a polyglutamine tract). An expansion to over 36 repeats results in Huntington's disease, a neurodegenerative disease. Twenty human disorders are known to result from similar tandem repeat expansions in various genes. The mechanism by which proteins with expanded polyglutamine tracts cause death of neurons is not fully understood. One possibility is that the proteins fail to fold properly and evade degradation, instead accumulating in aggregates that also sequester important transcription factors, thereby altering gene expression.
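
To make the repeat-counting idea concrete, here is a minimal Python sketch of how one might measure the longest run of consecutive CAG triplets in a sequence. The fragment below is invented, not the real Htt sequence, and a production tool would work with codon-aligned reading frames:

    import re

    def longest_cag_run(seq: str) -> int:
        """Return the largest number of consecutive CAG triplets in seq."""
        runs = re.findall(r"(?:CAG)+", seq)
        return max((len(run) // 3 for run in runs), default=0)

    # Hypothetical fragment carrying a run of 8 CAG repeats.
    seq = "ATG" + "CAG" * 8 + "CCGCCA"
    n = longest_cag_run(seq)
    print(n)  # 8
    print("expanded" if n > 36 else "within the typical 6-29 range")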

Tandem repeats are usually caused by slippage during replication, unequal crossing-over and gene conversion.

Transposable elements

Transposable elements (TEs) are sequences of DNA with a defined structure that are able to change their location in the genome. TEs fall into two classes: those that replicate by a copy-and-paste mechanism and those that are excised from the genome and inserted at a new location. In the human genome, three important classes of TEs make up more than 45% of the DNA: long interspersed nuclear elements (LINEs), short interspersed nuclear elements (SINEs), and endogenous retroviruses. These elements have considerable potential to modify genetic control in a host organism.

The movement of TEs is a driving force of genome evolution in eukaryotes because their insertion can disrupt gene functions, homologous recombination between TEs can produce duplications, and TEs can shuffle exons and regulatory sequences to new locations.

Retrotransposons

Retrotransposons, which are found mostly in eukaryotes and not in prokaryotes, form a large portion of the genomes of many eukaryotes. A retrotransposon is a transposable element that transposes through an RNA intermediate: the element, itself composed of DNA, is transcribed into RNA, and the RNA transcript is then copied back into DNA with the help of a specific enzyme called reverse transcriptase before being inserted at another site in the genome. Retrotransposons that carry a reverse transcriptase gene can trigger their own transposition, but elements that lack it must use reverse transcriptase synthesized by another retrotransposon. Retrotransposons can be divided into long terminal repeat (LTR) and non-long terminal repeat (non-LTR) elements.

Long terminal repeat (LTR) elements are derived from ancient retroviral infections, so they encode proteins related to retroviral proteins, including gag (structural proteins of the virus), pol (reverse transcriptase and integrase), pro (protease), and in some cases env (envelope) genes. These genes are flanked by long repeats at both the 5' and 3' ends. It has been reported that LTR elements make up the largest fraction of most plant genomes and might account for the huge variation in genome size.

Non-long terminal repeat (non-LTR) elements are classified as long interspersed nuclear elements (LINEs), short interspersed nuclear elements (SINEs), and Penelope-like elements (PLEs). In Dictyostelium discoideum, DIRS-like elements are another group belonging to the non-LTRs. Non-LTRs are widely spread in eukaryotic genomes.

Long interspersed nuclear elements (LINEs) encode genes for reverse transcriptase and endonuclease, making them autonomous transposable elements. The human genome has around 500,000 LINEs, accounting for around 17% of the genome.

Short interspersed nuclear elements (SINEs) are usually less than 500 base pairs long and are non-autonomous, so they rely on the proteins encoded by LINEs for transposition. The Alu element is the most common SINE found in primates; it is about 350 base pairs long and occupies about 11% of the human genome, with around 1,500,000 copies.

DNA transposons

DNA transposons encode a transposase enzyme between inverted terminal repeats. When expressed, the transposase recognizes the terminal inverted repeats that flank the transposon and catalyzes its excision and reinsertion at a new site. This cut-and-paste mechanism typically reinserts transposons near their original location (within about 100 kb). DNA transposons are found in bacteria and make up 3% of the human genome and 12% of the genome of the roundworm C. elegans.
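
As a purely illustrative toy model (all sequences and coordinates below are invented), the cut-and-paste mechanism can be sketched as string surgery in Python: locate the element between its terminal repeats, excise it, and reinsert it elsewhere:

    # Toy cut-and-paste transposition on a string "genome".
    # TTAA is its own reverse complement, so identical copies at both
    # ends form a palindromic pair of inverted terminal repeats.
    ITR = "TTAA"                        # hypothetical terminal repeat
    transposon = ITR + "GGGCCC" + ITR   # ITR + cargo + ITR
    genome = "AAAA" + transposon + "CCCCGGGGTTTT"

    # Excision: cut the transposon out of the genome.
    start = genome.find(transposon)
    excised = genome[:start] + genome[start + len(transposon):]

    # Reinsertion at a new (arbitrarily chosen) target site.
    target = 8
    new_genome = excised[:target] + transposon + excised[target:]

    print(excised)     # AAAACCCCGGGGTTTT
    print(new_genome)  # AAAACCCCTTAAGGGCCCTTAAGGGGTTTT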

Genome size

Log-log plot of the total number of annotated proteins in genomes submitted to GenBank as a function of genome size.

Genome size is the total number of DNA base pairs in one copy of a haploid genome. Genome size varies widely across species. Invertebrates have small genomes, which also correlates with a small number of transposable elements. Fish and amphibians have intermediate-size genomes, and birds have relatively small genomes; it has been suggested that birds lost a substantial portion of their genomes during the transition to flight. Before this loss, DNA methylation is thought to have allowed the adequate expansion of the genome.

In humans, the nuclear genome comprises approximately 3.1 billion nucleotides of DNA, divided into 24 linear molecules, the shortest 45,000,000 nucleotides in length and the longest 248,000,000 nucleotides, each contained in a different chromosome. There is no clear and consistent correlation between morphological complexity and genome size in either prokaryotes or lower eukaryotes. Genome size is largely a function of the expansion and contraction of repetitive DNA elements.
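
A quick back-of-the-envelope check on these numbers, assuming the textbook rise of about 0.34 nm per base pair of B-form DNA:

    # Rough physical length of the human nuclear genome.
    base_pairs = 3.1e9        # haploid genome, approximate
    rise_per_bp_m = 0.34e-9   # metres per base pair (B-form DNA)

    haploid_m = base_pairs * rise_per_bp_m
    print(f"haploid copy: ~{haploid_m:.2f} m")     # ~1.05 m
    print(f"diploid cell: ~{2 * haploid_m:.2f} m") # ~2.11 m of DNA per nucleus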

Since genomes are very complex, one research strategy is to reduce the number of genes in a genome to the bare minimum and still have the organism in question survive. There is experimental work being done on minimal genomes for single-celled organisms as well as for multicellular organisms (see Developmental biology). The work is both in vivo and in silico.

Genome size differences due to transposable elements

Genome sizes differ enormously, especially among the multicellular eukaryotes mentioned above. Much of this difference is due to the differing abundances of transposable elements, which evolve by creating new copies of themselves in the chromosomes. Eukaryotic genomes often contain many thousands of copies of these elements, most of which have acquired mutations that make them defective.

Here is a table of some significant or representative genomes.

Organism type | Organism | Genome size (base pairs) | Approx. no. of genes | Note
Virus | Porcine circovirus type 1 | 1,759 (1.8 kb) | | Smallest viruses replicating autonomously in eukaryotic cells.
Virus | Bacteriophage MS2 | 3,569 (3.5 kb) | | First sequenced RNA genome.
Virus | SV40 | 5,224 (5.2 kb) | |
Virus | Phage Φ-X174 | 5,386 (5.4 kb) | | First sequenced DNA genome.
Virus | HIV | 9,749 (9.7 kb) | |
Virus | Phage λ | 48,502 (48.5 kb) | | Often used as a vector for the cloning of recombinant DNA.
Virus | Megavirus | 1,259,197 (1.3 Mb) | | Until 2013 the largest known viral genome.
Virus | Pandoravirus salinus | 2,470,000 (2.47 Mb) | | Largest known viral genome.
Eukaryotic organelle | Human mitochondrion | 16,569 (16.6 kb) | |
Bacterium | Nasuia deltocephalinicola (strain NAS-ALF) | 112,091 (112 kb) | 137 | Smallest known non-viral genome; symbiont of leafhoppers.
Bacterium | Carsonella ruddii | 159,662 (160 kb) | | An endosymbiont of psyllid insects.
Bacterium | Buchnera aphidicola | 600,000 (600 kb) | | An endosymbiont of aphids.
Bacterium | Wigglesworthia glossinidia | 700,000 (700 kb) | | A symbiont in the gut of the tsetse fly.
Bacterium (cyanobacterium) | Prochlorococcus spp. (1.7 Mb) | 1,700,000 (1.7 Mb) | 1,884 | Smallest known cyanobacterium genome; one of the primary photosynthesizers on Earth.
Bacterium | Haemophilus influenzae | 1,830,000 (1.8 Mb) | | First genome of a living organism sequenced, July 1995.
Bacterium | Escherichia coli | 4,600,000 (4.6 Mb) | 4,288 |
Bacterium (cyanobacterium) | Nostoc punctiforme | 9,000,000 (9 Mb) | 7,432 | 7,432 open reading frames.
Bacterium | Solibacter usitatus (strain Ellin 6076) | 9,970,000 (10 Mb) | |
Amoeboid | Polychaos dubium ("Amoeba" dubia) | 670,000,000,000 (670 Gb) | | Largest known genome (disputed).
Plant | Genlisea tuberosa | 61,000,000 (61 Mb) | | Smallest recorded flowering plant genome, 2014.
Plant | Arabidopsis thaliana | 135,000,000 (135 Mb) | 27,655 | First plant genome sequenced, December 2000.
Plant | Populus trichocarpa | 480,000,000 (480 Mb) | 73,013 | First tree genome sequenced, September 2006.
Plant | Fritillaria assyriaca | 130,000,000,000 (130 Gb) | |
Plant | Paris japonica (Japanese-native, pale-petal) | 150,000,000,000 (150 Gb) | | Largest plant genome known.
Plant (moss) | Physcomitrella patens | 480,000,000 (480 Mb) | | First genome of a bryophyte sequenced, January 2008.
Fungus (yeast) | Saccharomyces cerevisiae | 12,100,000 (12.1 Mb) | 6,294 | First eukaryotic genome sequenced, 1996.
Fungus | Aspergillus nidulans | 30,000,000 (30 Mb) | 9,541 |
Nematode | Pratylenchus coffeae | 20,000,000 (20 Mb) | | Smallest animal genome known.
Nematode | Caenorhabditis elegans | 100,300,000 (100 Mb) | 19,000 | First multicellular animal genome sequenced, December 1998.
Insect | Drosophila melanogaster (fruit fly) | 175,000,000 (175 Mb) | 13,600 | Size varies by strain (175–180 Mb; the standard y w strain is 175 Mb).
Insect | Apis mellifera (honey bee) | 236,000,000 (236 Mb) | 10,157 |
Insect | Bombyx mori (silk moth) | 432,000,000 (432 Mb) | 14,623 | 14,623 predicted genes.
Insect | Solenopsis invicta (fire ant) | 480,000,000 (480 Mb) | 16,569 |
Mammal | Mus musculus | 2,700,000,000 (2.7 Gb) | 20,210 |
Mammal | Pan paniscus | 3,286,640,000 (3.3 Gb) | 20,000 | Bonobo; estimated genome size 3.29 billion bp.
Mammal | Homo sapiens | 3,117,000,000 (3.1 Gb) | 20,000 | Genome size estimated at 3.12 Gbp in 2022.
Bird | Gallus gallus | 1,043,000,000 (1.0 Gb) | 20,000 |
Fish | Tetraodon nigroviridis (a puffer fish) | 385,000,000 (390 Mb) | | Smallest known vertebrate genome, estimated at 340–385 Mb.
Fish | Protopterus aethiopicus (marbled lungfish) | 130,000,000,000 (130 Gb) | | Largest known vertebrate genome.

Genomic alterations

All the cells of an organism originate from a single cell, so they are expected to have identical genomes; however, in some cases, differences arise. Both the process of copying DNA during cell division and exposure to environmental mutagens can result in mutations in somatic cells. In some cases, such mutations lead to cancer because they cause cells to divide more quickly and invade surrounding tissues. In certain lymphocytes of the human immune system, V(D)J recombination generates different genomic sequences such that each cell produces a unique antibody or T cell receptor.

During meiosis, diploid cells divide twice to produce haploid germ cells. During this process, recombination results in a reshuffling of the genetic material from homologous chromosomes so each gamete has a unique genome.

Genome-wide reprogramming

Genome-wide reprogramming in mouse primordial germ cells involves epigenetic imprint erasure leading to totipotency. Reprogramming is facilitated by active DNA demethylation, a process that entails the DNA base excision repair pathway. This pathway is employed in the erasure of CpG methylation (5mC) in primordial germ cells. The erasure of 5mC occurs via its conversion to 5-hydroxymethylcytosine (5hmC) driven by high levels of the ten-eleven dioxygenase enzymes TET1 and TET2.

Genome evolution

Genomes are more than the sum of an organism's genes and have traits that may be measured and studied without reference to the details of any particular genes and their products. Researchers compare traits such as karyotype (chromosome number), genome size, gene order, codon usage bias, and GC-content to determine what mechanisms could have produced the great variety of genomes that exist today (for recent overviews, see Brown 2002; Saccone and Pesole 2003; Benfey and Protopapas 2004; Gibson and Muse 2004; Reese 2004; Gregory 2005).

Duplications play a major role in shaping the genome. Duplication may range from extension of short tandem repeats, to duplication of a cluster of genes, and all the way to duplication of entire chromosomes or even entire genomes. Such duplications are probably fundamental to the creation of genetic novelty.

Horizontal gene transfer is invoked to explain why small portions of the genomes of two otherwise very distantly related organisms are often extremely similar. Horizontal gene transfer seems to be common among many microbes. Also, eukaryotic cells seem to have experienced a transfer of some genetic material from their chloroplast and mitochondrial genomes to their nuclear chromosomes. Recent empirical data suggest that viruses and sub-viral RNA networks play a main driving role in generating genetic novelty and in natural genome editing.

In fiction

Works of science fiction illustrate concerns about the availability of genome sequences.

Michael Crichton's 1990 novel Jurassic Park and the subsequent film tell the story of a billionaire who creates a theme park of cloned dinosaurs on a remote island, with disastrous outcomes. A geneticist extracts dinosaur DNA from the blood of ancient mosquitoes and fills in the gaps with DNA from modern species to create several species of dinosaurs. A chaos theorist is asked to give his expert opinion on the safety of engineering an ecosystem with the dinosaurs, and he repeatedly warns that the outcomes of the project will be unpredictable and ultimately uncontrollable. These warnings about the perils of using genomic information are a major theme of the book.

The 1997 film Gattaca is set in a futuristic society where the genomes of children are engineered to contain the most ideal combination of their parents' traits, and metrics such as risk of heart disease and predicted life expectancy are documented for each person based on their genome. People conceived outside of the eugenics program, known as "In-Valids", suffer discrimination and are relegated to menial occupations. The protagonist of the film is an In-Valid who works to defy the supposed genetic odds and achieve his dream of working as a space navigator. The film warns against a future where genomic information fuels prejudice and extreme class differences between those who can and cannot afford genetically engineered children.

Ozone layer

From Wikipedia, the free encyclopedia
 
Ozone-oxygen cycle in the ozone layer.

The ozone layer or ozone shield is a region of Earth's stratosphere that absorbs most of the Sun's ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth's atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately 15 to 35 kilometers (9 to 22 mi) above Earth, although its thickness varies seasonally and geographically.

The ozone layer was discovered in 1913 by the French physicists Charles Fabry and Henri Buisson. Measurements of the Sun showed that the radiation sent out from its surface and reaching the ground on Earth is usually consistent with the spectrum of a black body with a temperature in the range of 5,500–6,000 K (5,230–5,730 °C), except that there was no radiation below a wavelength of about 310 nm at the ultraviolet end of the spectrum. It was deduced that the missing radiation was being absorbed by something in the atmosphere. Eventually the spectrum of the missing radiation was matched to only one known chemical, ozone. Its properties were explored in detail by the British meteorologist G. M. B. Dobson, who developed a simple spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone from the ground. Between 1928 and 1958, Dobson established a worldwide network of ozone monitoring stations, which continue to operate to this day. The "Dobson unit", a convenient measure of the amount of ozone overhead, is named in his honor.

The ozone layer absorbs 97 to 99 percent of the Sun's medium-frequency ultraviolet light (from about 200 nm to 315 nm wavelength), which otherwise would potentially damage exposed life forms near the surface.

In 1976, atmospheric research revealed that the ozone layer was being depleted by chemicals released by industry, mainly chlorofluorocarbons (CFCs). Concerns that increased UV radiation due to ozone depletion threatened life on Earth, including increased skin cancer in humans and other ecological problems, led to bans on the chemicals, and the latest evidence is that ozone depletion has slowed or stopped. The United Nations General Assembly has designated September 16 as the International Day for the Preservation of the Ozone Layer.

Venus also has a thin ozone layer at an altitude of 100 kilometers above the planet's surface.

Sources

The photochemical mechanisms that give rise to the ozone layer were discovered by the British physicist Sydney Chapman in 1930. Ozone in the Earth's stratosphere is created by ultraviolet light striking ordinary oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create ozone, O3. The ozone molecule is unstable (although, in the stratosphere, long-lived) and when ultraviolet light hits ozone it splits into a molecule of O2 and an individual atom of oxygen, a continuing process called the ozone-oxygen cycle. Chemically, this can be described as:

O2 + UV photon (λ < 240 nm) → 2 O
O + O2 → O3
O3 + UV photon → O2 + O

About 90 percent of the ozone in the atmosphere is contained in the stratosphere. Ozone concentrations are greatest between about 20 and 40 kilometres (66,000 and 131,000 ft), where they range from about 2 to 8 parts per million. If all of the ozone were compressed to the pressure of the air at sea level, it would be only 3 millimetres (1/8 inch) thick.
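
That 3 mm figure corresponds to a typical total ozone column of about 300 Dobson units, since one Dobson unit is defined as a 0.01 mm layer of pure ozone at standard temperature and pressure. A one-line conversion, for illustration:

    # Convert an ozone column in Dobson units (DU) to the thickness it
    # would have if compressed to standard temperature and pressure.
    MM_PER_DU = 0.01  # by definition: 1 DU = 0.01 mm of pure ozone at STP

    def column_thickness_mm(dobson_units: float) -> float:
        return dobson_units * MM_PER_DU

    print(column_thickness_mm(300))  # 3.0 (mm), the figure quoted above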

Ultraviolet light

UV-B energy levels at several altitudes. Blue line shows DNA sensitivity. Red line shows surface energy level with 10 percent decrease in ozone
 
Levels of ozone at various altitudes and blocking of different bands of ultraviolet radiation. Essentially all UV-C (100–280 nm) is blocked by dioxygen (from 100–200 nm) or else by ozone (200–280 nm) in the atmosphere. The shorter portion of the UV-C band and the more energetic UV above this band cause the formation of the ozone layer, when single oxygen atoms produced by UV photolysis of dioxygen (below 240 nm) react with more dioxygen. The ozone layer also blocks most, but not quite all, of the sunburn-producing UV-B (280–315 nm) band, which lies in the wavelengths longer than UV-C. The band of UV closest to visible light, UV-A (315–400 nm), is hardly affected by ozone, and most of it reaches the ground. UV-A does not primarily cause skin reddening, but there is evidence that it causes long-term skin damage.

Although the concentration of the ozone in the ozone layer is very small, it is vitally important to life because it absorbs biologically harmful ultraviolet (UV) radiation coming from the Sun. Extremely short or vacuum UV (10–100 nm) is screened out by nitrogen. UV radiation capable of penetrating nitrogen is divided into three categories, based on its wavelength; these are referred to as UV-A (400–315 nm), UV-B (315–280 nm), and UV-C (280–100 nm).

UV-C, which is very harmful to all living things, is entirely screened out by a combination of dioxygen (< 200 nm) and ozone (> about 200 nm) by around 35 kilometres (115,000 ft) altitude. UV-B radiation can be harmful to the skin and is the main cause of sunburn; excessive exposure can also cause cataracts, immune system suppression, and genetic damage, resulting in problems such as skin cancer. The ozone layer (which absorbs from about 200 nm to 310 nm with a maximal absorption at about 250 nm) is very effective at screening out UV-B; for radiation with a wavelength of 290 nm, the intensity at the top of the atmosphere is 350 million times stronger than at the Earth's surface. Nevertheless, some UV-B, particularly at its longest wavelengths, reaches the surface, and is important for the skin's production of vitamin D in mammals.
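
To put the 350-million-fold figure in perspective, it can be restated as an optical depth using the Beer-Lambert law, I_surface = I_top * exp(-tau). This is only an illustrative calculation based on the number quoted above:

    import math

    # Express the quoted attenuation factor at 290 nm as an optical depth.
    attenuation_factor = 350e6          # from the text above
    tau = math.log(attenuation_factor)  # natural log
    print(f"optical depth at 290 nm: ~{tau:.1f}")  # ~19.7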

Ozone is transparent to most UV-A, so most of this longer-wavelength UV radiation reaches the surface, and it constitutes most of the UV reaching the Earth. This type of UV radiation is significantly less harmful to DNA, although it may still potentially cause physical damage, premature aging of the skin, indirect genetic damage, and skin cancer.

Distribution in the stratosphere

The thickness of the ozone layer varies worldwide: it is generally thinner near the equator and thicker near the poles. Thickness refers to how much ozone is in a column over a given area, and it varies from season to season. These variations are due to atmospheric circulation patterns and solar intensity.

The majority of ozone is produced over the tropics and is transported towards the poles by stratospheric wind patterns. In the northern hemisphere these patterns, known as the Brewer-Dobson circulation, make the ozone layer thickest in the spring and thinnest in the fall. Ozone is produced in the tropics when circulation lifts ozone-poor air out of the troposphere into the stratosphere, where solar UV radiation photolyzes oxygen molecules and turns them into ozone; the ozone-rich air is then carried to higher latitudes and sinks into lower layers of the atmosphere.

Research has found that ozone levels in the United States are highest in the spring months of April and May and lowest in October. While the total amount of ozone increases moving from the tropics to higher latitudes, the concentrations are greater in high northern latitudes than in high southern latitudes: spring ozone columns in high northern latitudes occasionally exceed 600 DU and average 450 DU, whereas 400 DU constituted a usual maximum in the Antarctic before anthropogenic ozone depletion. This difference occurred naturally because of the weaker polar vortex and stronger Brewer-Dobson circulation in the northern hemisphere, owing to that hemisphere's large mountain ranges and greater contrasts between land and ocean temperatures. The difference between high northern and southern latitudes has increased since the 1970s due to the ozone hole phenomenon. The highest amounts of ozone are found over the Arctic during the spring months of March and April, while the Antarctic has its lowest amounts during its spring months of September and October.

Brewer-Dobson circulation in the ozone layer.

Depletion

NASA projections of stratospheric ozone concentrations if chlorofluorocarbons had not been banned.

The ozone layer can be depleted by free radical catalysts, including nitric oxide (NO), nitrous oxide (N2O), hydroxyl (OH), atomic chlorine (Cl), and atomic bromine (Br). While there are natural sources for all of these species, the concentrations of chlorine and bromine increased markedly in recent decades because of the release of large quantities of man-made organohalogen compounds, especially chlorofluorocarbons (CFCs) and bromofluorocarbons. These highly stable compounds are capable of surviving the rise to the stratosphere, where Cl and Br radicals are liberated by the action of ultraviolet light. Each radical is then free to initiate and catalyze a chain reaction capable of breaking down over 100,000 ozone molecules. By 2009, nitrous oxide was the largest ozone-depleting substance (ODS) emitted through human activities.

The breakdown of ozone in the stratosphere results in reduced absorption of ultraviolet radiation. Consequently, unabsorbed and dangerous ultraviolet radiation is able to reach the Earth's surface at a higher intensity. Ozone levels have dropped by a worldwide average of about 4 percent since the late 1970s. For approximately 5 percent of the Earth's surface, around the north and south poles, much larger seasonal declines have been seen, and are described as "ozone holes". These "ozone holes" are in fact regions where the ozone layer is abnormally thin; the thinnest parts of the layer occur over the poles. The discovery of the annual depletion of ozone above the Antarctic was first announced by Joe Farman, Brian Gardiner and Jonathan Shanklin, in a paper which appeared in Nature on May 16, 1985.

Regulation attempts have included, but have not been limited to, the Clean Air Act implemented by the United States Environmental Protection Agency. The Clean Air Act introduced the requirement of National Ambient Air Quality Standards (NAAQS), with ozone pollution being one of six criteria pollutants. This regulation has proven effective, since counties, cities and tribal regions must abide by these standards, and the EPA also provides assistance for each region to regulate contaminants. Effective presentation of information has also proven important in educating the general population about the existence and regulation of ozone depletion and contaminants. In a scientific paper, Sheldon Ungar explored how information about ozone depletion, climate change and related topics was communicated to the public. The ozone case was communicated to lay persons "with easy-to-understand bridging metaphors derived from the popular culture" and related to "immediate risks with everyday relevance". The specific metaphors used in the discussion (ozone shield, ozone hole) proved quite useful and, compared to global climate change, the ozone case was much more widely seen as a "hot issue" and an imminent risk. Lay people were cautious about depletion of the ozone layer and the risks of skin cancer.

"Bad" ozone can cause adverse health risks respiratory effects (difficulty breathing) and is proven to be an aggravator of respiratory illnesses such as asthma, COPD and emphysema. That is why many countries have set in place regulations to improve "good" ozone and prevent the increase of "bad" ozone in urban or residential areas. In terms of ozone protection (the preservation of "good" ozone) the European Union has strict guidelines on what products are allowed to be bought, distributed or used in specific areas. With effective regulation, the ozone is expected to heal over time.

Levels of atmospheric ozone measured by satellite show clear seasonal variations and appear to confirm the layer's decline over time.
 


In 1978, the United States, Canada and Norway enacted bans on CFC-containing aerosol sprays that damage the ozone layer. The European Community rejected an analogous proposal. In the U.S., chlorofluorocarbons continued to be used in other applications, such as refrigeration and industrial cleaning, until after the discovery of the Antarctic ozone hole in 1985. After negotiation of an international treaty (the Montreal Protocol), CFC production was capped at 1986 levels with commitments to long-term reductions. This allowed for a ten-year phase-in for developing countries (identified in Article 5 of the protocol). Since that time, the treaty was amended to ban CFC production after 1995 in the developed countries, and later in developing countries. Today, all of the world's 197 countries have signed the treaty. Beginning January 1, 1996, only recycled and stockpiled CFCs were available for use in developed countries like the US. This production phaseout was possible because of efforts to ensure that there would be substitute chemicals and technologies for all ODS uses.

On August 2, 2003, scientists announced that the global depletion of the ozone layer may be slowing down because of the international regulation of ozone-depleting substances. In a study organized by the American Geophysical Union, three satellites and three ground stations confirmed that the upper-atmosphere ozone-depletion rate slowed significantly during the previous decade. Some breakdown can be expected to continue because of ODSs used by nations which have not banned them, and because of gases which are already in the stratosphere. Some ODSs, including CFCs, have very long atmospheric lifetimes, ranging from 50 to over 100 years. It has been estimated that the ozone layer will recover to 1980 levels near the middle of the 21st century. A gradual trend toward "healing" was reported in 2016.

Compounds containing C–H bonds (such as hydrochlorofluorocarbons, or HCFCs) have been designed to replace CFCs in certain applications. These replacement compounds are more reactive and less likely to survive long enough in the atmosphere to reach the stratosphere where they could affect the ozone layer. While being less damaging than CFCs, HCFCs can have a negative impact on the ozone layer, so they are also being phased out. These in turn are being replaced by hydrofluorocarbons (HFCs) and other compounds that do not destroy stratospheric ozone at all.

The residual effects of CFCs accumulating within the atmosphere lead to a concentration gradient between the atmosphere and the ocean. These organohalogen compounds dissolve into the ocean's surface waters and act as a time-dependent tracer, which helps scientists study ocean circulation by tracing biological, physical and chemical pathways.

Implications for astronomy

As ozone in the atmosphere prevents most of the energetic ultraviolet radiation from reaching the surface of the Earth, astronomical data in these wavelengths have to be gathered from satellites orbiting above the atmosphere and ozone layer. Most of the light from young hot stars is in the ultraviolet, so the study of these wavelengths is important for studying the origins of galaxies. The Galaxy Evolution Explorer (GALEX) is an orbiting ultraviolet space telescope that was launched on April 28, 2003 and operated until early 2012.

Radio propagation

From Wikipedia, the free encyclopedia

Radio propagation is the behavior of radio waves as they travel, or are propagated, from one point to another in vacuum, or into various parts of the atmosphere. As a form of electromagnetic radiation, like light waves, radio waves are affected by the phenomena of reflection, refraction, diffraction, absorption, polarization, and scattering. Understanding the effects of varying conditions on radio propagation has many practical applications, from choosing frequencies for amateur radio communications and international shortwave broadcasters, to designing reliable mobile telephone systems, to radio navigation and the operation of radar systems.

Several different types of propagation are used in practical radio transmission systems. Line-of-sight propagation means radio waves which travel in a straight line from the transmitting antenna to the receiving antenna. Line of sight transmission is used for medium-distance radio transmission, such as cell phones, cordless phones, walkie-talkies, wireless networks, FM radio, television broadcasting, radar, and satellite communication (such as satellite television). Line-of-sight transmission on the surface of the Earth is limited to the distance to the visual horizon, which depends on the height of transmitting and receiving antennas. It is the only propagation method possible at microwave frequencies and above.

At lower frequencies in the MF, LF, and VLF bands, diffraction allows radio waves to bend over hills and other obstacles, and travel beyond the horizon, following the contour of the Earth. These are called surface waves or ground wave propagation. AM broadcast and amateur radio stations use ground waves to cover their listening areas. As the frequency gets lower, the attenuation with distance decreases, so very low frequency (VLF) and extremely low frequency (ELF) ground waves can be used to communicate worldwide. VLF and ELF waves can penetrate significant distances through water and earth, and these frequencies are used for mine communication and military communication with submerged submarines.

At medium wave and shortwave frequencies (MF and HF bands) radio waves can refract from the ionosphere. This means that medium and short radio waves transmitted at an angle into the sky can be refracted back to Earth at great distances beyond the horizon – even transcontinental distances. This is called skywave propagation. It is used by amateur radio operators to communicate with operators in distant countries, and by shortwave broadcast stations to transmit internationally.

In addition, there are several less common radio propagation mechanisms, such as tropospheric scattering (troposcatter), tropospheric ducting (ducting) at VHF frequencies and near vertical incidence skywave (NVIS) which are used when HF communications are desired within a few hundred miles.

Frequency dependence

At different frequencies, radio waves travel through the atmosphere by different mechanisms or modes:

Radio frequencies and their primary mode of propagation

Band | Frequency | Wavelength | Propagation via
ELF (extremely low frequency) | 3–30 Hz | 100,000–10,000 km | Guided between the Earth and the D layer of the ionosphere.
SLF (super low frequency) | 30–300 Hz | 10,000–1,000 km | Guided between the Earth and the ionosphere.
ULF (ultra low frequency) | 0.3–3 kHz (300–3,000 Hz) | 1,000–100 km | Guided between the Earth and the ionosphere.
VLF (very low frequency) | 3–30 kHz (3,000–30,000 Hz) | 100–10 km | Guided between the Earth and the ionosphere.
LF (low frequency) | 30–300 kHz (30,000–300,000 Hz) | 10–1 km | Guided between the Earth and the ionosphere. Ground waves.
MF (medium frequency) | 300–3,000 kHz (300,000–3,000,000 Hz) | 1,000–100 m | Ground waves. E, F layer ionospheric refraction at night, when D layer absorption weakens.
HF (high frequency, short wave) | 3–30 MHz (3,000,000–30,000,000 Hz) | 100–10 m | E layer ionospheric refraction. F1, F2 layer ionospheric refraction.
VHF (very high frequency) | 30–300 MHz (30,000,000–300,000,000 Hz) | 10–1 m | Line-of-sight propagation. Infrequent sporadic E (Es) ionospheric refraction. Uncommonly, F2 layer ionospheric refraction during high sunspot activity up to 50 MHz and rarely to 80 MHz. Sometimes tropospheric ducting or meteor scatter.
UHF (ultra high frequency) | 300–3,000 MHz (300,000,000–3,000,000,000 Hz) | 100–10 cm | Line-of-sight propagation. Sometimes tropospheric ducting.
SHF (super high frequency) | 3–30 GHz (3,000,000,000–30,000,000,000 Hz) | 10–1 cm | Line-of-sight propagation. Sometimes rain scatter.
EHF (extremely high frequency) | 30–300 GHz (30,000,000,000–300,000,000,000 Hz) | 10–1 mm | Line-of-sight propagation, limited by atmospheric absorption to a few kilometers.
THF (tremendously high frequency) | 0.3–3 THz (300,000,000,000–3,000,000,000,000 Hz) | 1–0.1 mm | Line-of-sight propagation, limited by atmospheric absorption to a few meters.

Free space propagation

In free space, all electromagnetic waves (radio, light, X-rays, etc.) obey the inverse-square law, which states that the power density of an electromagnetic wave is proportional to the inverse of the square of the distance from a point source:

S ∝ 1/d²

where S is the power density at distance d from the source; equivalently, for a transmitter radiating a power P equally in all directions, S = P / (4πd²).

At typical communication distances from a transmitter, the transmitting antenna usually can be approximated by a point source. Doubling the distance of a receiver from a transmitter means that the power density of the radiated wave at that new location is reduced to one-quarter of its previous value.

The power density per surface unit is proportional to the product of the electric and magnetic field strengths. Thus, doubling the propagation path distance from the transmitter reduces each of these received field strengths over a free-space path by one-half.
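
A minimal numeric sketch of this behavior for an ideal isotropic point source; the transmit power and distances are arbitrary example values:

    import math

    def power_density(p_tx_watts: float, distance_m: float) -> float:
        """Power density (W/m^2) of an isotropic radiator in free space:
        the transmitted power spread over a sphere of radius distance_m."""
        return p_tx_watts / (4 * math.pi * distance_m ** 2)

    s1 = power_density(100.0, 1_000.0)  # 100 W transmitter, 1 km away
    s2 = power_density(100.0, 2_000.0)  # same transmitter, 2 km away
    print(s2 / s1)             # 0.25: doubling distance quarters power density
    print(math.sqrt(s2 / s1))  # 0.5: each field strength is halved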

Radio waves in vacuum travel at the speed of light. The Earth's atmosphere is thin enough that radio waves in the atmosphere travel very close to the speed of light, but variations in density and temperature can cause some slight refraction (bending) of waves over distances.

Direct modes (line-of-sight)

Line-of-sight refers to radio waves which travel directly in a line from the transmitting antenna to the receiving antenna. It does not necessarily require a cleared sight path; at lower frequencies radio waves can pass through buildings, foliage and other obstructions. This is the most common propagation mode at VHF and above, and the only possible mode at microwave frequencies and above. On the surface of the Earth, line of sight propagation is limited by the visual horizon to about 40 miles (64 km). This is the method used by cell phones, cordless phones, walkie-talkies, wireless networks, point-to-point microwave radio relay links, FM and television broadcasting and radar. Satellite communication uses longer line-of-sight paths; for example home satellite dishes receive signals from communication satellites 22,000 miles (35,000 km) above the Earth, and ground stations can communicate with spacecraft billions of miles from Earth.
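
As a rough sketch, the distance to the radio horizon can be estimated from antenna height with the standard smooth-earth approximation; the k = 4/3 effective-earth-radius factor accounts for typical atmospheric refraction, and the antenna heights below are arbitrary example values:

    import math

    def horizon_km(height_m: float, k: float = 4 / 3) -> float:
        """Approximate distance (km) to the radio horizon for an antenna
        at height_m metres, using an effective earth radius of k * 6371 km."""
        earth_radius_km = 6371.0
        return math.sqrt(2 * k * earth_radius_km * height_m / 1000.0)

    # A 100 m tower talking to a 10 m antenna: sum the two horizon distances.
    link_km = horizon_km(100.0) + horizon_km(10.0)
    print(f"~{link_km:.0f} km")  # roughly 54 km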

Ground plane reflection effects are an important factor in VHF line-of-sight propagation. Interference between the direct line-of-sight beam and the ground-reflected beam often leads to an effective inverse-fourth-power (1/d⁴) law for ground-plane limited radiation.
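
A sketch of the far-field two-ray ground-reflection approximation that produces this inverse-fourth-power behavior; antenna gains are set to one for simplicity, and the model is only valid when the distance is much larger than the antenna heights:

    def two_ray_rx_power(p_tx: float, h_tx_m: float, h_rx_m: float,
                         d_m: float) -> float:
        """Two-ray ground-reflection model with unity antenna gains:
        Pr = Pt * (ht * hr)^2 / d^4, valid for d >> ht, hr."""
        return p_tx * (h_tx_m * h_rx_m) ** 2 / d_m ** 4

    p1 = two_ray_rx_power(10.0, 30.0, 2.0, 1_000.0)
    p2 = two_ray_rx_power(10.0, 30.0, 2.0, 2_000.0)
    print(p2 / p1)  # 0.0625 = 1/16: doubling distance costs about 12 dB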

Surface modes (groundwave)

Ground wave propagation

Lower frequency (between 30 and 3,000 kHz) vertically polarized radio waves can travel as surface waves following the contour of the Earth; this is called ground wave propagation.

In this mode the radio wave propagates by interacting with the conductive surface of the Earth. The wave "clings" to the surface and thus follows the curvature of the Earth, so ground waves can travel over mountains and beyond the horizon. Ground waves propagate in vertical polarization, so vertical antennas (monopoles) are required. Since the ground is not a perfect electrical conductor, ground waves are attenuated as they follow the Earth's surface. Attenuation increases with frequency, which is why ground waves are the main mode of propagation at lower frequencies, in the MF, LF and VLF bands. Ground waves are used by radio broadcasting stations in the MF and LF bands, and for time signals and radio navigation systems.

At even lower frequencies, in the VLF to ELF bands, an Earth-ionosphere waveguide mechanism allows even longer range transmission. These frequencies are used for secure military communications. They can also penetrate to a significant depth into seawater, and so are used for one-way military communication to submerged submarines.

Early long-distance radio communication (wireless telegraphy) before the mid-1920s used low frequencies in the longwave bands and relied exclusively on ground-wave propagation. Frequencies above 3 MHz were regarded as useless and were given to hobbyists (radio amateurs). The discovery around 1920 of the ionospheric reflection or skywave mechanism made the medium wave and short wave frequencies useful for long-distance communication and they were allocated to commercial and military users.

Non-line-of-sight modes

Ionospheric modes (skywave)

Sky wave propagation

Skywave propagation, also referred to as skip, is any of the modes that rely on reflection and refraction of radio waves from the ionosphere. The ionosphere is a region of the atmosphere from about 60 to 500 km (37 to 311 mi) that contains layers of charged particles (ions) which can refract a radio wave back toward the Earth. A radio wave directed at an angle into the sky can be reflected back to Earth beyond the horizon by these layers, allowing long-distance radio transmission. The F2 layer is the most important ionospheric layer for long-distance, multiple-hop HF propagation, though the F1, E, and D layers also play significant roles. The D layer, when present during sunlight periods, causes a significant amount of signal loss, as does the E layer, whose maximum usable frequency can rise to 4 MHz and above and thus block higher-frequency signals from reaching the F2 layer. The layers, or more appropriately "regions", are directly affected by the Sun on a diurnal cycle, a seasonal cycle and the 11-year sunspot cycle, and they determine the utility of these modes. During solar maxima, or sunspot highs and peaks, the whole HF range up to 30 MHz can usually be used around the clock, and F2 propagation up to 50 MHz is observed frequently, depending upon daily solar flux values. During solar minima, or minimum sunspot counts down to zero, propagation of frequencies above 15 MHz is generally unavailable.

The claim is commonly made that two-way HF propagation along a given path is reciprocal; that is, if the signal from location A reaches location B at a good strength, the signal from location B will be similar at station A because the same path is traversed in both directions. However, the ionosphere is far too complex and constantly changing to support the reciprocity theorem; the path is never exactly the same in both directions. In brief, conditions at the two end-points of a path generally cause dissimilar polarization shifts, and hence dissimilar splits into ordinary rays and extraordinary rays (Pedersen rays), which have different propagation characteristics due to differences in ionization density, shifting zenith angles, effects of the Earth's magnetic dipole contours, antenna radiation patterns, ground conditions, and other variables.

Forecasting of skywave modes is of considerable interest to amateur radio operators and commercial marine and aircraft communications, and also to shortwave broadcasters. Real-time propagation can be assessed by listening for transmissions from specific beacon transmitters.

Meteor scattering

Meteor scattering relies on reflecting radio waves off the intensely ionized columns of air generated by meteors. While this mode is of very short duration, often only a fraction of a second to a couple of seconds per event, digital meteor burst communications allows remote stations to communicate with a station that may be hundreds of miles, up to over 1,000 miles (1,600 km), away without the expense required for a satellite link. This mode is most generally useful on VHF frequencies between 30 and 250 MHz.

Auroral backscatter

Intense columns of auroral ionization at 100 km (60 mi) altitude within the auroral oval backscatter radio waves, including those on HF and VHF. Backscatter is angle-sensitive: the incident ray must be very close to perpendicular to the magnetic field line of the column. Random motions of electrons spiraling around the field lines create a Doppler spread that broadens the spectra of the emission to more or less noise-like, depending on how high a radio frequency is used. Radio auroras are observed mostly at high latitudes and rarely extend down to middle latitudes. Their occurrence depends on solar activity (flares, coronal holes, CMEs), and events are more numerous during solar cycle maxima. Radio aurora includes the so-called afternoon radio aurora, which produces stronger but more distorted signals; after the Harang minimum, the late-night radio aurora (substorming phase) returns with variable signal strength and lesser Doppler spread. The propagation range for this predominantly backscatter mode extends up to about 2,000 km (1,250 mi) in the east-west plane, but the strongest signals are observed most frequently from the north at nearby sites on the same latitudes.

Rarely, a strong radio aurora is followed by auroral-E, which resembles both propagation types in some ways.

Sporadic-E propagation

Sporadic E (Es) propagation occurs on the HF and VHF bands. It must not be confused with ordinary HF E-layer propagation. Sporadic E at mid-latitudes occurs mostly during the summer season, from May to August in the northern hemisphere and from November to February in the southern hemisphere. There is no single cause for this mysterious propagation mode. The reflection takes place in a thin sheet of ionization around 90 km (55 mi) in height. The ionization patches drift westwards at speeds of a few hundred kilometers per hour. A weak periodicity is noted during the season: typically, Es is observed on 1 to 3 successive days and then remains absent for a few days before recurring. Es does not occur during the small hours; events usually begin at dawn, with a peak in the afternoon and a second peak in the evening. Es propagation is usually gone by local midnight.

Observation of radio propagation beacons operating around 28.2 MHz, 50 MHz and 70 MHz indicates that the maximum observed frequency (MOF) for Es hovers around 30 MHz on most days during the summer season, but sometimes the MOF may shoot up to 100 MHz or even more within ten minutes and then decline slowly over the next few hours. The peak phase includes oscillation of the MOF with a periodicity of approximately 5 to 10 minutes. The propagation range for single-hop Es is typically 1,000 to 2,000 km (600 to 1,250 mi), but with multi-hop, double the range is observed. The signals are very strong but also subject to slow, deep fading.

Tropospheric modes

Radio waves in the VHF and UHF bands can travel somewhat beyond the visual horizon due to refraction in the troposphere, the bottom layer of the atmosphere below 20 km (12 miles). This is due to changes in the refractive index of air with temperature and pressure. Tropospheric delay is a source of error in radio ranging techniques, such as the Global Positioning System (GPS). In addition, unusual conditions can sometimes allow propagation at greater distances:

Tropospheric ducting

Sudden changes in the atmosphere's vertical moisture content and temperature profiles can, on random occasions, make UHF, VHF and microwave signals propagate hundreds of kilometers, up to about 2,000 km (1,200 mi) and, for the ducting mode, even farther, beyond the normal radio horizon. The inversion layer is mostly observed over high-pressure regions, but there are several tropospheric weather conditions which create these randomly occurring propagation modes. The inversion layer's altitude is typically found between 100 and 1,000 m (330 and 3,280 ft) for non-ducting, and about 500 to 3,000 m (1,600 to 9,800 ft) for ducting, and the duration of the events is typically from several hours up to several days. Higher frequencies experience the most dramatic increase in signal strengths, while on low-VHF and HF the effect is negligible. Propagation path attenuation may fall below the free-space loss. Some of the lesser inversion types, related to warm ground and cooler air moisture content, occur regularly at certain times of the year and times of day. A typical example is the late-summer, early-morning tropospheric enhancement that brings in signals from distances up to a few hundred kilometers for a couple of hours, until undone by the Sun's warming effect.

Tropospheric scattering (troposcatter)

At VHF and higher frequencies, small variations (turbulence) in the density of the atmosphere at a height of around 6 miles (9.7 km) can scatter some of the normally line-of-sight beam of radio-frequency energy back toward the ground. In tropospheric scatter (troposcatter) communication systems, a powerful beam of microwaves is aimed above the horizon, and a high-gain antenna over the horizon aimed at the section of the troposphere through which the beam passes receives the tiny scattered signal. Troposcatter systems can achieve over-the-horizon communication between stations 500 miles (800 km) apart, and the military developed networks such as the White Alice Communications System covering all of Alaska before the 1960s, when communication satellites largely replaced them.
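
The quoted height of the scattering region follows from Earth-bulge geometry: two horizon-grazing beams from stations a distance d apart meet near height d^2 / (8 R_eff), where R_eff is the 4/3-Earth-radius commonly used to account for standard refraction. A sketch:

R_EFF_KM = (4.0 / 3.0) * 6371.0  # effective Earth radius under standard refraction

def common_volume_height_km(path_km):
    # Height at mid-path where two horizon-grazing beams intersect
    return path_km ** 2 / (8.0 * R_EFF_KM)

print(f"800 km path: beams meet near {common_volume_height_km(800.0):.1f} km altitude")
# roughly 9.4 km, matching the ~6 mile figure above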

Rain scattering

Rain scattering is purely a microwave propagation mode and is best observed around 10 GHz, but it extends down to a few gigahertz, the limit being set by the size of the scattering particles relative to the wavelength. This mode scatters signals mostly forward and backward when using horizontal polarization, and sideways when using vertical polarization. Forward scattering typically yields propagation ranges of 800 km (500 miles). Scattering from snowflakes and ice pellets also occurs, but scattering from ice without a watery surface is less effective. The most common application of this phenomenon is microwave rain radar, but rain-scatter propagation can be a nuisance, causing unwanted signals to propagate intermittently where they are not anticipated or desired. Similar reflections may also occur from insects, though at lower altitudes and shorter range. Rain also causes attenuation of point-to-point and satellite microwave links. Attenuation values of up to 30 dB have been observed at 30 GHz during heavy tropical rain.
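
Specific rain attenuation is commonly modeled with the power law gamma = k * R^alpha in dB/km (the form used in ITU-R P.838). The coefficients below are illustrative round numbers for the vicinity of 30 GHz, not the tabulated ITU values, which vary with frequency and polarization:

def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.2, alpha=1.0):
    # Power-law specific attenuation gamma = k * R**alpha (dB/km) times path.
    # k and alpha are illustrative placeholders, not official coefficients.
    return k * rain_rate_mm_h ** alpha * path_km

print(f"{rain_attenuation_db(100.0, 1.5):.0f} dB")  # ~30 dB for 100 mm/h over 1.5 km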

Airplane scattering

Airplane scattering (or more often reflection) is observed at VHF through microwave frequencies and, besides back-scattering, yields momentary propagation up to 500 km (300 miles) even in mountainous terrain. The most common back-scatter applications are air-traffic radar, bistatic forward-scatter guided-missile and airplane-detecting trip-wire radar, and US space radar.

Lightning scattering

Lightning scattering has sometimes been observed on VHF and UHF over distances of about 500 km (300 miles). The hot lightning channel scatters radio waves for a fraction of a second. The RF noise burst from the lightning makes the initial part of the channel's open time unusable, and the ionization disappears quickly because of recombination at low altitude and high atmospheric pressure. Although the hot lightning channel is briefly observable with microwave radar, no practical use for this mode has been found in communications.

Other effects

Diffraction

Knife-edge diffraction is the propagation mode where radio waves are bent around sharp edges. For example, this mode is used to send radio signals over a mountain range when a line-of-sight path is not available. However, the angle cannot be too sharp or the signal will not diffract. The diffraction mode requires increased signal strength, so higher power or better antennas will be needed than for an equivalent line-of-sight path.

Diffraction depends on the relationship between the wavelength and the size of the obstacle; in other words, on the size of the obstacle measured in wavelengths. Lower frequencies diffract around large smooth obstacles such as hills more easily. For example, in many cases where VHF (or higher-frequency) communication is not possible due to shadowing by a hill, it is still possible to communicate using the upper part of the HF band, where the surface wave is of little use.
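
This wavelength dependence can be quantified with the Fresnel-Kirchhoff diffraction parameter and the standard single-knife-edge loss approximation from ITU-R P.526. The path geometry below is invented for illustration:

import math

def fresnel_parameter(h_m, d1_m, d2_m, wavelength_m):
    # Fresnel-Kirchhoff diffraction parameter for an edge h metres above
    # the direct path, at distances d1 and d2 from the two terminals.
    return h_m * math.sqrt(2.0 * (d1_m + d2_m) / (wavelength_m * d1_m * d2_m))

def knife_edge_loss_db(v):
    # ITU-R P.526 approximation to the knife-edge loss, valid for v > -0.78
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# 100 m ridge at the midpoint of a 20 km path: HF suffers far less than VHF
for f_hz in (7e6, 144e6):
    v = fresnel_parameter(100.0, 10e3, 10e3, 3e8 / f_hz)
    print(f"{f_hz / 1e6:5.0f} MHz: v = {v:4.2f}, loss ~ {knife_edge_loss_db(v):.0f} dB")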

Diffraction phenomena by small obstacles are also important at high frequencies. Signals for urban cellular telephony tend to be dominated by ground-plane effects as they travel over the rooftops of the urban environment. They then diffract over roof edges into the street, where multipath propagation, absorption and diffraction phenomena dominate.

Absorption

Low-frequency radio waves travel easily through brick and stone, and VLF even penetrates sea-water. As the frequency rises, absorption effects become more important. At microwave or higher frequencies, absorption by molecular resonances in the atmosphere (mostly from water, H2O, and oxygen, O2) is a major factor in radio propagation. For example, in the 58–60 GHz band there is a major absorption peak which makes this band useless for long-distance use. This phenomenon was first discovered during radar research in World War II. Above about 400 GHz, the Earth's atmosphere blocks most of the spectrum up to ultraviolet light (which is blocked by ozone), but visible light and some of the near-infrared are transmitted. Heavy rain and falling snow also affect microwave absorption.
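
For a sense of scale, sea-level oxygen absorption near 60 GHz is roughly 15 dB/km (an approximate, widely quoted figure, not taken from this article), so the absorption alone quickly exceeds any practical link budget:

OXYGEN_DB_PER_KM = 15.0  # approximate sea-level oxygen absorption near 60 GHz

for path_km in (0.5, 1.0, 5.0):
    print(f"{path_km:4.1f} km path: ~{OXYGEN_DB_PER_KM * path_km:5.1f} dB absorption loss")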

Measuring HF propagation

HF propagation conditions can be simulated using radio propagation models, such as the Voice of America Coverage Analysis Program, and real-time measurements can be made using chirp transmitters. For radio amateurs, the WSPR mode provides maps with real-time propagation conditions between a network of transmitters and receivers. Even without special beacons, real-time propagation conditions can be measured: a worldwide network of receivers decodes Morse code signals on amateur radio frequencies in real time and provides sophisticated search functions and propagation maps for every station received.

Practical effects

The average person can notice the effects of changes in radio propagation in several ways.

In AM broadcasting, the dramatic ionospheric changes that occur overnight in the mediumwave band drive a unique broadcast license scheme in the United States, with entirely different transmitter power output levels and directional antenna patterns to cope with skywave propagation at night. Very few stations are allowed to run without modifications during dark hours, typically only those on clear channels in North America. Many stations have no authorization to run at all outside of daylight hours.

For FM broadcasting (and the few remaining low-band TV stations), weather is the primary cause of changes in VHF propagation, along with some diurnal changes when the sky is mostly free of cloud cover. These changes are most obvious during temperature inversions, such as in the late-night and early-morning hours when it is clear, allowing the ground and the air near it to cool more rapidly. This not only causes dew, frost, or fog, but also a slight "drag" on the bottom of the radio waves, bending the signals down so that they can follow the Earth's curvature beyond the normal radio horizon. The result is typically several stations being heard from another media market, usually a neighboring one, but sometimes from a few hundred kilometers (miles) away. Ice storms are also the result of inversions, but these normally cause more scattered omnidirectional propagation, resulting mainly in interference, often among weather radio stations. In late spring and early summer, a combination of other atmospheric factors can occasionally cause skips that duct high-power signals to places well over 1,000 km (600 miles) away.

Non-broadcast signals are also affected. Mobile phone signals are in the UHF band, ranging from 700 to over 2600 MHz, a range which makes them even more prone to weather-induced propagation changes. In urban (and to some extent suburban) areas with a high population density, this is partly offset by the use of smaller cells, which use lower effective radiated power and beam tilt to reduce interference, and therefore increase frequency reuse and user capacity. However, since this would not be very cost-effective in more rural areas, these cells are larger and so more likely to cause interference over longer distances when propagation conditions allow.

While this is generally transparent to the user, thanks to the way cellular networks handle cell-to-cell handoffs, when cross-border signals are involved, unexpected charges for international roaming may occur even though the user has not left the country at all. This often occurs between southern San Diego and northern Tijuana at the western end of the U.S./Mexico border, and between eastern Detroit and western Windsor along the U.S./Canada border. Since signals can travel unobstructed over a body of water far larger than the Detroit River, and cool water temperatures also cause inversions in surface air, this "fringe roaming" sometimes occurs across the Great Lakes and between islands in the Caribbean. Signals can skip from the Dominican Republic to a mountainside in Puerto Rico and vice versa, or between the U.S. and British Virgin Islands, among others. While unintended cross-border roaming is often automatically removed by mobile phone company billing systems, inter-island roaming typically is not.

Empirical models

A radio propagation model, also known as the radio wave propagation model or the radio frequency propagation model, is an empirical mathematical formulation for the characterization of radio wave propagation as a function of frequency, distance and other conditions. A single model is usually developed to predict the behavior of propagation for all similar links under similar constraints. Created with the goal of formalizing the way radio waves are propagated from one place to another, such models typically predict the path loss along a link or the effective coverage area of a transmitter.

As the path loss encountered along any radio link serves as the dominant factor in characterizing propagation for the link, radio propagation models typically focus on realization of the path loss, with the auxiliary task of predicting the area of coverage for a transmitter or modeling the distribution of signals over different regions.

Because each individual telecommunication link has to contend with different terrain, paths, obstructions, atmospheric conditions, and other phenomena, it is intractable to formulate the exact loss for all telecommunication systems in a single mathematical equation. As a result, different models exist for different types of radio links under different conditions. The models rely on computing the median path loss for a link under a certain probability that the considered conditions will occur.

Radio propagation models are empirical in nature, which means they are developed based on large collections of data gathered for the specific scenario. For any model, the collection of data has to be sufficiently large to cover, with sufficient likelihood, all kinds of situations that can arise in that scenario. Like all empirical models, radio propagation models do not describe the exact behavior of a link; rather, they predict the most likely behavior the link may exhibit under the specified conditions.
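
A minimal example of such an empirical model is the log-distance path-loss model with log-normal shadowing: a median loss that grows with distance, plus a random term expressing the probability mentioned above. The reference loss, exponent, and sigma below are illustrative, not fitted values:

import math
import random

def path_loss_db(d_m, d0_m=1.0, pl0_db=40.0, exponent=3.0, sigma_db=6.0):
    # Median log-distance loss plus zero-mean Gaussian shadowing (in dB).
    # All parameters are illustrative; real models fit them to measurements.
    median = pl0_db + 10.0 * exponent * math.log10(d_m / d0_m)
    return median + random.gauss(0.0, sigma_db)

random.seed(1)
print(f"one sample at 200 m: {path_loss_db(200.0):.1f} dB")  # median ~109 dB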

Different models have been developed to meet the needs of realizing the propagation behavior in different conditions. Types of models for radio propagation include:

Models for free space attenuation (see the sketch after this list)
Models for outdoor attenuation
Models for indoor attenuation
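
For the first category in the list above, the free-space path loss has a closed form; with distance in kilometers and frequency in MHz, the Friis relation reduces to the constant-plus-logarithms expression below (a standard result, sketched here):

import math

def free_space_path_loss_db(distance_km, freq_mhz):
    # Friis free-space loss: 20*log10(d) + 20*log10(f) + 32.44 dB
    return 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_mhz) + 32.44

print(f"{free_space_path_loss_db(10.0, 144.0):.1f} dB")  # 10 km at 144 MHz: ~95.6 dB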
