Tuesday, May 6, 2025

Photonics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Photonics
Dispersion of light (photons) by a prism

Photonics is a branch of optics that involves the application of generation, detection, and manipulation of light in the form of photons through emission, transmission, modulation, signal processing, switching, amplification, and sensing.

Photonics is closely related to quantum electronics: quantum electronics deals with the theoretical side, while photonics deals with its engineering applications. Though photonics covers the technical applications of light across the whole spectrum, most photonic applications are in the range of visible and near-infrared light.

The term photonics developed as an outgrowth of the first practical semiconductor light emitters invented in the early 1960s and optical fibers developed in the 1970s.

History

The word 'photonics' is derived from the Greek word "phos", meaning light (which has genitive case "photos"; in compound words, the root "photo-" is used). It appeared in the late 1960s to describe a research field whose goal was to use light to perform functions that traditionally fell within the typical domain of electronics, such as telecommunications and information processing.

An early instance of the word was in a December 1954 letter from John W. Campbell to Gotthard Gunther:

Incidentally, I’ve decided to invent a new science — photonics. It bears the same relationship to Optics that electronics does to electrical engineering. Photonics, like electronics, will deal with the individual units; optics and EE deal with the group-phenomena! And note that you can do things with electronics that are impossible in electrical engineering!

Photonics as a field began with the invention of the maser and laser in 1958 to 1960. Other developments followed: the laser diode in the 1970s, optical fibers for transmitting information, and the erbium-doped fiber amplifier. These inventions formed the basis for the telecommunications revolution of the late 20th century and provided the infrastructure for the Internet.

Though coined earlier, the term photonics came into common use in the 1980s as fiber-optic data transmission was adopted by telecommunications network operators. At that time, the term was used widely at Bell Laboratories. Its use was confirmed when the IEEE Lasers and Electro-Optics Society established an archival journal named Photonics Technology Letters at the end of the 1980s.

During the period leading up to the dot-com crash circa 2001, photonics was a field focused largely on optical telecommunications. However, photonics covers a huge range of science and technology applications, including laser manufacturing, biological and chemical sensing, medical diagnostics and therapy, display technology, and optical computing. Further growth of photonics is likely if current silicon photonics developments are successful.

Relationship to other fields

Classical optics

Photonics is closely related to optics. Classical optics long preceded the discovery that light is quantized, which came when Albert Einstein famously explained the photoelectric effect in 1905. Optics tools include the refracting lens, the reflecting mirror, and the many optical components and instruments developed throughout the 15th to 19th centuries. Key tenets of classical optics, such as Huygens' principle, developed in the 17th century, and Maxwell's equations and the wave equations, developed in the 19th, do not depend on quantum properties of light.

Modern optics

Photonics is related to quantum optics, optomechanics, electro-optics, optoelectronics, and quantum electronics. However, each area carries slightly different connotations in scientific and government communities and in the marketplace. Quantum optics often connotes fundamental research, whereas photonics connotes applied research and development.

The term photonics more specifically connotes:

  • The particle properties of light,
  • The potential of creating signal processing device technologies using photons,
  • The practical application of optics, and
  • An analogy to electronics.

The term optoelectronics connotes devices or circuits that comprise both electrical and optical functions, e.g., a thin-film semiconductor device. The term electro-optics came into earlier use and specifically encompasses nonlinear electrical-optical interactions applied, e.g., as bulk crystal modulators such as the Pockels cell, but also includes advanced imaging sensors.

An important aspect of the modern definition of photonics is that there is no universal agreement on the field's boundaries. According to a report on optics.org, when the publisher of Journal of Optics A: Pure and Applied Optics asked its editorial board about streamlining the journal's name, the responses revealed significant differences in how the terms "optics" and "photonics" describe the subject area, with some descriptions proposing that "photonics embraces optics". In practice, as the field evolves, "modern optics" and photonics are often used interchangeably in scientific jargon.

Emerging fields

Photonics also relates to the emerging science of quantum information and quantum optics.

Applications

A sea mouse (Aphrodita aculeata), showing colorful spines, a remarkable example of photonic engineering by a living organism

Applications of photonics are ubiquitous. Included are all areas from everyday life to the most advanced science, e.g. light detection, telecommunications, information processing, photovoltaics, photonic computing, lighting, metrology, spectroscopy, holography, medicine (surgery, vision correction, endoscopy, health monitoring), biophotonics, military technology, laser material processing, art diagnostics (involving infrared reflectography, X-rays, ultraviolet fluorescence, XRF), agriculture, and robotics.

Just as applications of electronics have expanded dramatically since the first transistor was invented in 1948, the unique applications of photonics continue to emerge. Economically important applications for semiconductor photonic devices include optical data recording, fiber optic telecommunications, laser printing (based on xerography), displays, and optical pumping of high-power lasers. The potential applications of photonics are virtually unlimited and include chemical synthesis, medical diagnostics, on-chip data communication, sensors, laser defense, and fusion energy, to name several interesting additional examples.

Microphotonics and nanophotonics usually include photonic crystals and solid-state devices.

Overview of photonics research

The science of photonics includes investigation of the emission, transmission, amplification, detection, and modulation of light.

Light sources

Photonics commonly uses semiconductor-based light sources, such as light-emitting diodes (LEDs), superluminescent diodes, and lasers. Other light sources include single photon sources, fluorescent lamps, cathode-ray tubes (CRTs), and plasma screens. Note that while CRTs, plasma screens, and organic light-emitting diode displays generate their own light, liquid crystal displays (LCDs) like TFT screens require a backlight of either cold cathode fluorescent lamps or, more often today, LEDs.

A characteristic of research on semiconductor light sources is the frequent use of III-V semiconductors instead of classical semiconductors like silicon and germanium. This is due to the special properties of III-V semiconductors that allow for the implementation of light-emitting devices. Examples of material systems used are gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs), or other compound semiconductors. They are also used in conjunction with silicon to produce hybrid silicon lasers.

Transmission media

Light can be transmitted through any transparent medium. Glass fiber or plastic optical fiber can be used to guide the light along a desired path. In optical communications, optical fibers allow transmission distances of more than 100 km without amplification, depending on the bit rate and modulation format used for transmission. A very advanced research topic within photonics is the investigation and fabrication of special structures and "materials" with engineered optical properties. These include photonic crystals, photonic crystal fibers, and metamaterials.
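The transmission-distance figure above follows from a simple power budget. A hedged sketch, with illustrative numbers (0.2 dB/km is a typical loss for silica fiber at 1550 nm; the launch power and receiver sensitivity below are assumptions, not values from the article):

```python
# Sketch of an optical link power budget; all numeric values are examples.

def received_power_dbm(launch_dbm, length_km, loss_db_per_km=0.2):
    """Received optical power after fiber attenuation."""
    return launch_dbm - loss_db_per_km * length_km

def max_reach_km(launch_dbm, sensitivity_dbm, loss_db_per_km=0.2):
    """Longest unamplified span the power budget allows."""
    return (launch_dbm - sensitivity_dbm) / loss_db_per_km

# 0 dBm launch power, -28 dBm receiver sensitivity -> 140 km of budget-limited reach
print(max_reach_km(0.0, -28.0))
```

Real systems also budget for connector losses, dispersion penalties, and safety margin, which is why practical unamplified spans depend on bit rate and modulation format.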

Amplifiers

Optical amplifiers are used to amplify an optical signal. Optical amplifiers used in optical communications are erbium-doped fiber amplifiers, semiconductor optical amplifiers, Raman amplifiers and optical parametric amplifiers. A very advanced research topic on optical amplifiers is the research on quantum dot semiconductor optical amplifiers.

Detection

Photodetectors detect light. Photodetectors range from very fast photodiodes for communications applications, through medium-speed charge-coupled devices (CCDs) for digital cameras, to very slow solar cells used for energy harvesting from sunlight. There are also many other photodetectors based on thermal, chemical, quantum, photoelectric, and other effects.

Modulation

Modulation of a light source is used to encode information on a light source. Modulation can be achieved by the light source directly. One of the simplest examples is to use a flashlight to send Morse code. Another method is to take the light from a light source and modulate it in an external optical modulator.

An additional topic covered by modulation research is the modulation format. On-off keying has been the most commonly used modulation format in optical communications. In recent years, more advanced modulation formats such as phase-shift keying and even orthogonal frequency-division multiplexing have been investigated to counteract effects such as dispersion that degrade the quality of the transmitted signal.
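On-off keying, the simplest of the formats mentioned above, can be illustrated in a few lines. This is a toy sketch (not from the article): each bit is represented by a block of high or low intensity samples, and the receiver decides by averaging each bit period against a threshold.

```python
# Toy on-off keying (OOK) modulator and demodulator; values are illustrative.

def ook_modulate(bits, samples_per_bit=8, high=1.0, low=0.0):
    """Map each bit to a block of constant-intensity samples."""
    signal = []
    for b in bits:
        signal.extend([high if b else low] * samples_per_bit)
    return signal

def ook_demodulate(signal, samples_per_bit=8, threshold=0.5):
    """Average each bit period and compare against a decision threshold."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        mean = sum(signal[i:i + samples_per_bit]) / samples_per_bit
        bits.append(1 if mean > threshold else 0)
    return bits

bits = [1, 0, 1, 1, 0]
assert ook_demodulate(ook_modulate(bits)) == bits
```

Averaging over the bit period is what makes the decision robust to noise; more advanced formats such as phase-shift keying encode information in the optical phase rather than the intensity.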

Photonic systems

Photonics also includes research on photonic systems. This term is often used for optical communication systems. This area of research focuses on the implementation of photonic systems like high speed photonic networks. This also includes research on optical regenerators, which improve optical signal quality.

Photonic integrated circuits

Photonic integrated circuits (PICs) are optically active integrated semiconductor photonic devices. The leading commercial application of PICs is optical transceivers for data center optical networks. PICs fabricated on III-V indium phosphide semiconductor wafer substrates were the first to achieve commercial success; PICs based on silicon wafer substrates are now also a commercialized technology.

Key Applications for Integrated Photonics include:

  • Data Center Interconnects: Data centers continue to grow in scale as companies and institutions store and process more information in the cloud. With the increase in data center compute, the demands on data center networks correspondingly increase. Optical cables can support greater lane bandwidth at longer transmission distances than copper cables. For short-reach distances and up to 40 Gbit/s data transmission rates, non-integrated approaches such as vertical-cavity surface-emitting lasers can be used for optical transceivers on multi-mode optical fiber networks. Beyond this range and bandwidth, photonic integrated circuits are key to enable high-performance, low-cost optical transceivers.
  • Analog RF Signal Applications: Using the GHz-precision signal processing of photonic integrated circuits, radiofrequency (RF) signals can be manipulated with high fidelity to add or drop multiple channels of radio spread across an ultra-broadband frequency range. In addition, photonic integrated circuits can remove background noise from an RF signal with unprecedented precision, which will increase signal-to-noise performance and make possible new benchmarks in low-power performance. Taken together, this high-precision processing makes it possible to pack large amounts of information into ultra-long-distance radio communications.
  • Sensors: Photons can also be used to detect and differentiate the optical properties of materials. They can identify chemical or biochemical gases from air pollution, organic produce, and contaminants in the water. They can also be used to detect abnormalities in the blood, such as low glucose levels, and measure biometrics such as pulse rate. Photonic integrated circuits are being designed as comprehensive and ubiquitous sensors with glass/silicon, and embedded via high-volume production in various mobile devices.  Mobile platform sensors are enabling us to more directly engage with practices that better protect the environment, monitor food supply and keep us healthy.
  • LIDAR and other phased array imaging: Arrays of PICs can take advantage of phase delays in the light reflected from objects with three-dimensional shapes to reconstruct 3D images, and Light Detection and Ranging (LIDAR) with laser light can complement radar by providing precision imaging (with 3D information) at close distances. This new form of machine vision has immediate applications in driverless cars, to reduce collisions, and in biomedical imaging. Phased arrays can also be used for free-space communications and novel display technologies. Current versions of LIDAR predominantly rely on moving parts, making them large, slow, low-resolution, costly, and prone to mechanical vibration and premature failure. Integrated photonics can realize LIDAR within a footprint the size of a postage stamp, scan without moving parts, and be produced in high volume at low cost.

Biophotonics

Biophotonics applies tools from the field of photonics to the study of biology. Biophotonics mainly focuses on improving medical diagnostic abilities (for example, for cancer or infectious diseases) but can also be used for environmental and other applications. The main advantages of this approach are speed of analysis, non-invasive diagnostics, and the ability to work in situ.

Electronics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Electronics
Modern surface-mount electronic components on a printed circuit board, with a large integrated circuit at the top

Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. It is a subfield of physics and electrical engineering which uses active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog signals to digital signals.

Electronic devices have significantly influenced the development of many aspects of modern society, such as telecommunications, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which continually produces ever-more sophisticated electronic devices and circuits in response to global demand. The semiconductor industry is one of the global economy's largest and most profitable sectors, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017.

History and development

One of the earliest Audion radio receivers, constructed by De Forest in 1914

Karl Ferdinand Braun's development of the crystal detector, the first semiconductor device, in 1874, and Sir Joseph John Thomson's identification of the electron in 1897, along with the subsequent invention of the vacuum tube, which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by John Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages, such as radio signals from a radio antenna, practicable.

Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.

The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes continued to play a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.

In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.

The MOSFET was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment.

As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them had to be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available.

Subfields

Devices and components

Various electronic components

An electronic component is any component in an electronic system, either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components include capacitors, inductors, and resistors, whilst active components include semiconductor devices such as transistors and thyristors, which control current flow at the electron level.

Types of circuits

Electronic circuit functions can be divided into two groups: analog and digital. A particular device may consist of circuitry of either type, or a mix of the two. Analog circuits are becoming less common, as many of their functions are being digitized.

Analog circuits

Analog circuits use a continuous range of voltage or current for signal processing, as opposed to the discrete levels used in digital circuits. Analog circuits were common throughout an electronic device in the early years in devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced.

As semiconductor technology developed, many of the functions of analog circuits were taken over by digital circuits, and modern circuits that are entirely analog are less common, their functions being replaced by a hybrid approach which, for instance, uses analog circuits at the front end of a device receiving an analog signal, and then applies digital processing using microprocessor techniques thereafter.

Sometimes it may be difficult to classify some circuits that have elements of both linear and non-linear operation. An example is the voltage comparator which receives a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output.

Analog circuits are still widely used for signal amplification, such as in the entertainment industry, and conditioning signals from analog sensors, such as in industrial measurement and control.

Digital circuits

Digital circuits are electric circuits based on discrete voltage levels. Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits, employing millions of such gates.

Digital circuits use a binary system with two voltage levels labelled "0" and "1" to indicate logical state. Often logic "0" will be a lower voltage and referred to as "Low", while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current-based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary.
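The voltage-to-logic convention described above can be sketched in a few lines. This is an illustrative toy (the threshold voltages are assumptions, loosely modeled on 3.3 V CMOS-style input levels, not figures from the article), including the reversed active-low definition:

```python
# Toy mapping of voltages to logic levels; thresholds are illustrative only.

def to_logic(voltage, v_low=0.8, v_high=2.0, active_high=True):
    """Classify a voltage as logic 0, logic 1, or None (undefined region)."""
    if voltage <= v_low:
        level = "Low"
    elif voltage >= v_high:
        level = "High"
    else:
        return None  # between thresholds: neither valid "0" nor "1"
    if active_high:
        return 1 if level == "High" else 0
    return 0 if level == "High" else 1  # reversed ("0" is "High") definition

assert to_logic(3.3) == 1
assert to_logic(0.2) == 0
assert to_logic(3.3, active_high=False) == 0
assert to_logic(1.5) is None
```

The undefined middle band is deliberate: real logic families specify guaranteed input thresholds precisely so that noise between the bands cannot be misread as a valid level.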

Ternary (three-state) logic has been studied, and some prototype computers have been made, but it has not gained significant practical acceptance. Computers and digital signal processors are universally constructed with digital circuits using transistors such as MOSFETs in the electronic logic gates to generate binary states.

A selection of logic gates, used extensively in digital electronics


Design

Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.

Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.

Computer-aided design

Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.

Negative qualities

Thermal management

Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.
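A standard back-of-envelope check for the heat-sinking described above is the thermal-resistance chain from junction to case to heatsink to ambient. A hedged sketch (the formula is the conventional steady-state model; all numeric values below are illustrative assumptions, not from the article):

```python
# Steady-state junction temperature from power and thermal resistances.
# theta values are in degrees C per watt; all numbers are examples.

def junction_temp_c(power_w, t_ambient_c, theta_jc, theta_cs, theta_sa):
    """T_junction = T_ambient + P * (theta_jc + theta_cs + theta_sa)."""
    return t_ambient_c + power_w * (theta_jc + theta_cs + theta_sa)

# 10 W device, 25 C ambient, 1.0 + 0.5 + 2.5 C/W chain -> 65 C junction
print(junction_temp_c(10.0, 25.0, 1.0, 0.5, 2.5))
```

Adding a fan or a larger heatsink lowers theta_sa, which is usually the dominant term in the chain and therefore the main lever for cooling.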

Noise

Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, the latter of which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed, as they are due to limitations in physical properties.
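The temperature dependence of thermally generated noise can be made concrete with the Johnson-Nyquist formula for a resistor's thermal noise voltage, v_rms = sqrt(4 k T R B). A small sketch (the formula is standard physics; the resistor value and bandwidth are illustrative assumptions):

```python
# Thermal (Johnson-Nyquist) noise of a resistor; values are examples.
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(resistance_ohm, temp_k, bandwidth_hz):
    """RMS thermal noise voltage: sqrt(4 * k * T * R * B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohm * bandwidth_hz)

# 1 kOhm resistor over a 10 kHz bandwidth: less noise when cooled from
# room temperature (290 K) to liquid-nitrogen temperature (77 K).
v_room = johnson_noise_vrms(1e3, 290.0, 10e3)
v_cold = johnson_noise_vrms(1e3, 77.0, 10e3)
assert v_cold < v_room
```

Because the noise voltage scales as the square root of temperature, cooling from 290 K to 77 K reduces it by a factor of about 1.9, which is why low-noise detectors and amplifiers are often cryogenically cooled.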

Packaging methods

Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern-day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) synthetic resin bonded paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2), characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for European markets.

Through-hole devices mounted on the circuit board of a mid-1980s home computer. Axial-lead devices are at upper left, while blue radial-lead capacitors are at upper right.

Electrical components are generally mounted in one of several ways, such as through-hole or surface-mount construction.

Industry

The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly.

However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor, and increasing technological sophistication, became widely available there.

Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology.

By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China.

Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.

Relative biological effectiveness

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Relative_biological_effectiveness

In radiobiology, the relative biological effectiveness (often abbreviated as RBE) is the ratio of biological effectiveness of one type of ionizing radiation relative to another, given the same amount of absorbed energy. The RBE is an empirical value that varies depending on the type of ionizing radiation, the energies involved, the biological effects being considered such as cell death, and the oxygen tension of the tissues or so-called oxygen effect.

Application

The absorbed dose can be a poor indicator of the biological effect of radiation, as the biological effect can depend on many other factors, including the type of radiation, energy, and type of tissue. The relative biological effectiveness can help give a better measure of the biological effect of radiation. The relative biological effectiveness for radiation of type R on a tissue is defined as the ratio

    RBE = D_X / D_R

where D_X is a reference absorbed dose of radiation of a standard type X, and D_R is the absorbed dose of radiation of type R that causes the same amount of biological damage. Both doses are quantified by the amount of energy absorbed in the cells.
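The definition above is a simple ratio of doses producing equal biological effect. A minimal sketch:

```python
# RBE as defined above: ratio of the reference dose to the test dose
# that produce the same amount of biological damage. Doses in gray (Gy).

def rbe(dose_reference_gy, dose_test_gy):
    """Relative biological effectiveness of radiation type R vs. reference X."""
    return dose_reference_gy / dose_test_gy

# If 3 Gy of reference X-rays cause the same damage as 1 Gy of radiation R,
# then the RBE of R is 3.
assert rbe(3.0, 1.0) == 3.0
```

An RBE greater than 1 therefore means the test radiation is more biologically damaging per unit of absorbed energy than the reference.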

Different types of radiation have different biological effectiveness mainly because they transfer their energy to the tissue in different ways. Photons and beta particles have a low linear energy transfer (LET) coefficient, meaning that they ionize atoms in the tissue that are spaced by several hundred nanometers (several tenths of a micrometer) apart, along their path. In contrast, the much more massive alpha particles and neutrons leave a denser trail of ionized atoms in their wake, spaced about one tenth of a nanometer apart (i.e., less than one-thousandth of the typical distance between ionizations for photons and beta particles).

RBEs can be used for either cancer/hereditary risks (stochastic) or for harmful tissue reactions (deterministic) effects. Tissues have different RBEs depending on the type of effect. For high LET radiation (i.e., alphas and neutrons), the RBEs for deterministic effects tend to be lower than those for stochastic effects.

The concept of RBE is relevant in medicine, such as in radiology and radiotherapy, and to the evaluation of risks and consequences of radioactive contamination in various contexts, such as nuclear power plant operation, nuclear fuel disposal and reprocessing, nuclear weapons, uranium mining, and ionizing radiation safety.

Relation to radiation weighting factors (WR)

ICRP Protection Dose quantities in SI units

For the purposes of computing the equivalent dose to an organ or tissue, the International Commission on Radiological Protection (ICRP) has defined a standard set of radiation weighting factors (WR), formerly termed the quality factor (Q). The radiation weighting factors convert absorbed dose (measured in SI units of grays or non-SI rads) into formal biological equivalent dose for radiation exposure (measured in units of sieverts or rem). However, ICRP states:

"The quantities equivalent dose and effective dose should not be used to quantify higher radiation doses or to make decisions on the need for any treatment related to tissue reactions [i.e., deterministic effects]. For such purposes, doses should be evaluated in terms of absorbed dose (in gray, Gy), and where high-LET radiations (e.g., neutrons or alpha particles) are involved, an absorbed dose, weighted with an appropriate RBE, should be used"

Radiation weighting factors are largely based on the RBE of radiation for stochastic health risks. However, for simplicity, the radiation weighting factors are not dependent on the type of tissue, and the values are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, with respect to external (external to the cell) sources. Radiation weighting factors have not been developed for internal sources of heavy ions, such as a recoil nucleus.

The ICRP 2007 standard values for relative effectiveness are given below. The higher the radiation weighting factor for a type of radiation, the more damaging it is, and this is incorporated into the calculation to convert from gray to sievert units.

The radiation weighting factor for neutrons has been revised over time and remains controversial.
Radiation                                                 Energy      WR (formerly Q)
x-rays, gamma rays, beta particles, muons                 (any)       1
neutrons                                                  < 1 MeV     2.5 + 18.2·e^(−[ln(E)]²/6)
neutrons                                                  1–50 MeV    5.0 + 17.0·e^(−[ln(2E)]²/6)
neutrons                                                  > 50 MeV    2.5 + 3.25·e^(−[ln(0.04E)]²/6)
protons, charged pions                                    (any)       2
alpha particles, nuclear fission products, heavy nuclei   (any)       20
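The three continuous neutron formulas above can be sketched in code; E is the neutron energy in MeV, and the function name is illustrative rather than from any standard library:

```python
import math

def neutron_wr(energy_mev: float) -> float:
    """ICRP 103 continuous radiation weighting factor WR for neutrons."""
    e = energy_mev
    if e < 1.0:
        return 2.5 + 18.2 * math.exp(-(math.log(e)) ** 2 / 6)
    elif e <= 50.0:
        return 5.0 + 17.0 * math.exp(-(math.log(2 * e)) ** 2 / 6)
    else:
        return 2.5 + 3.25 * math.exp(-(math.log(0.04 * e)) ** 2 / 6)

# Equivalent dose (Sv) = absorbed dose (Gy) x WR
absorbed_dose_gy = 0.010  # hypothetical absorbed dose
print(neutron_wr(1.0))                      # WR peaks near 1 MeV, about 20.7
print(absorbed_dose_gy * neutron_wr(1.0))   # equivalent dose in Sv
```

The piecewise branches join continuously: both the first and second expressions give WR ≈ 20.7 at E = 1 MeV, which is why intermediate-energy neutrons carry the largest weighting.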

Radiation weighting factors that go from physical energy to biological effect must not be confused with tissue weighting factors. The tissue weighting factors are used to convert an equivalent dose to a given tissue in the body, to an effective dose, a number that provides an estimation of total danger to the whole organism, as a result of the radiation dose to part of the body.
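The two-step conversion described above, from absorbed dose through equivalent dose to effective dose, can be illustrated with a short sketch. The tissue factors shown are a small illustrative subset of the ICRP 103 set (the full set sums to 1.0 over all tissues), and the doses are hypothetical:

```python
# Illustrative subset of ICRP 103 tissue weighting factors w_T
W_T = {"lung": 0.12, "thyroid": 0.04, "skin": 0.01}

# Hypothetical equivalent doses H_T per tissue, in sieverts
# (each already includes the radiation weighting factor WR)
H_T = {"lung": 0.002, "thyroid": 0.010, "skin": 0.050}

# Effective dose E = sum over tissues of w_T * H_T (in Sv)
effective_dose = sum(W_T[t] * H_T[t] for t in W_T)
print(effective_dose)
```

The point of the tissue factors is that the same equivalent dose counts more toward whole-body risk when it lands in a radiosensitive organ such as the lung than in a less sensitive one such as the skin.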

Experimental methods

Data for CHO-K1 cell line irradiated by photons (blue curve) and by carbon ions (red curve). The RBE is given by the ratio of the photon dose to the carbon-ion dose that produces the same surviving fraction.

Typically the evaluation of relative biological effectiveness is done on various types of living cells grown in culture medium, including prokaryotic cells such as bacteria, simple eukaryotic cells such as single celled plants, and advanced eukaryotic cells derived from organisms such as rats. By irradiating batches of cells with different doses and types of radiation, a relationship between dose and the fraction of cells that die can be found, and then used to find the doses corresponding to some common survival rate. The ratio of these doses is the RBE of R. Instead of death, the endpoint might be the fraction of cells that become unable to undergo mitotic division (or, for bacteria, binary fission), thus being effectively sterilized — even if they can still carry out other cellular functions.
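The procedure above, finding the isoeffective doses from two dose–survival curves and taking their ratio, can be sketched using the linear-quadratic survival model common in radiobiology. The alpha/beta parameters below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model: surviving fraction S = exp(-alpha*D - beta*D^2)."""
    return math.exp(-alpha * dose - beta * dose ** 2)

def dose_for_survival(s_target, alpha, beta):
    """Invert the LQ model: solve alpha*D + beta*D^2 = -ln(S) for D >= 0."""
    ln_s = -math.log(s_target)
    if beta == 0:
        return ln_s / alpha
    # positive root of beta*D^2 + alpha*D - ln_s = 0
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * ln_s)) / (2 * beta)

# Hypothetical parameters: reference photons vs. a densely ionizing test radiation
d_reference = dose_for_survival(0.10, alpha=0.2, beta=0.02)  # photon dose, Gy
d_test = dose_for_survival(0.10, alpha=0.8, beta=0.02)       # test-radiation dose, Gy
rbe = d_reference / d_test  # reference dose / test dose at equal effect
print(f"RBE at 10% survival: {rbe:.2f}")
```

Because the RBE is defined at a chosen endpoint (here, 10% survival), a different survival level generally yields a different RBE for the same pair of curves.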

The types R of ionizing radiation most considered in RBE evaluation are X-rays and gamma radiation (both consisting of photons), alpha radiations (helium-4 nuclei), beta radiation (electrons and positrons), neutron radiation, and heavy nuclei, including the fragments of nuclear fission. For some kinds of radiation, the RBE is strongly dependent on the energy of the individual particles.

Dependence on tissue type

Early on it was found that X-rays, gamma rays, and beta radiation were essentially equivalent for all cell types. Therefore, the standard radiation type X is generally an X-ray beam with 250 keV photons or cobalt-60 gamma rays. As a result, the relative biological effectiveness of beta and photon radiation is essentially 1.

For other radiation types, the RBE is not a well-defined physical quantity, since it varies somewhat with the type of tissue and with the precise place of absorption within the cell. Thus, for example, the RBE for alpha radiation is 2–3 when measured on bacteria, 4–6 for simple eukaryotic cells, and 6–8 for higher eukaryotic cells. According to one source it may be much higher (6500, with X-rays as the reference) on oocytes. The RBE of neutrons is 4–6 for bacteria, 8–12 for simple eukaryotic cells, and 12–16 for higher eukaryotic cells.

Dependence on source location

In the early experiments, the sources of radiation were all external to the cells that were irradiated. However, since alpha particles cannot traverse the outermost dead layer of human skin, they can do significant damage only if they come from the decay of atoms inside the body. Since the range of an alpha particle is typically about the diameter of a single eukaryotic cell, the precise location of the emitting atom in the tissue cells becomes significant.

For this reason, it has been suggested that the health impact of contamination by alpha emitters might have been substantially underestimated. Measurements of RBE with external sources also neglect the ionization caused by the recoil of the parent nucleus due to the alpha decay. While the recoil of the parent nucleus typically carries only about 2% of the energy of the emitted alpha particle, its range is extremely short (about 2–3 angstroms), due to its high electric charge and high mass. The parent nucleus is required to recoil, upon emission of an alpha particle, with a discrete kinetic energy due to conservation of momentum. Thus, all of the ionization energy from the recoil nucleus is deposited in an extremely small volume near its original location, typically in the cell nucleus on the chromosomes, which have an affinity for heavy metals. The bulk of studies, using sources external to the cell, have yielded RBEs between 10 and 20. Since most of the ionization damage from the passage of the alpha particle is deposited in the cytoplasm, whereas that from the recoil nucleus is deposited on the DNA itself, the recoil nucleus likely causes greater damage than the alpha particle itself.

History

In 1931, Failla and Henshaw reported on determination of the relative biological effectiveness (RBE) of x rays and γ rays. This appears to be the first use of the term ‘RBE’. The authors noted that RBE was dependent on the experimental system being studied. Somewhat later, it was pointed out by Zirkle et al. (1952) that the biological effectiveness depends on the spatial distribution of the energy imparted and the density of ionisations per unit path length of the ionising particles. Zirkle et al. coined the term ‘linear energy transfer (LET)’ to be used in radiobiology for the stopping power, i.e. the energy loss per unit path length of a charged particle. The concept was introduced in the 1950s, at a time when the deployment of nuclear weapons and nuclear reactors spurred research on the biological effects of artificial radioactivity. It had been noticed that those effects depended both on the type and energy spectrum of the radiation, and on the kind of living tissue. The first systematic experiments to determine the RBE were conducted in that decade.

Radiophobia

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Radiophobia
Radiation need not be feared, but it must command your respect.
Health physics poster exhorting respect for—rather than fear of—radiation. (ORNL, 1947)

Radiophobia is an irrational or excessive fear of ionizing radiation, leading to overestimating the health risks of radiation compared to other risks. It can impede rational decision-making and contribute to counter-productive behavior and policies. Radiophobia is primarily a social phenomenon as opposed to a purely psychological dynamic. The term is also used to describe the opposition to the use of nuclear technology (i.e. nuclear power) arising from concerns disproportionately greater than actual risks would merit.

Early use

The term was used in a paper entitled "Radio-phobia and radio-mania" presented by Dr Albert Soiland of Los Angeles in 1903. In the 1920s, the term was used to describe people who were afraid of radio broadcasting and receiving technology. In 1931, radiophobia was referred to in The Salt Lake Tribune as a "fear of loudspeakers", an affliction that Joan Crawford was reported as suffering. The term "radiophobia" was also printed in Australian newspapers in the 1930s and 1940s, assuming a similar meaning. The 1949 poem by Margaret Mercia Baker entitled "Radiophobia" laments the intrusion of advertising into radio broadcasts. The term remained in use with its original association with radios and radio broadcasting during the 1940s and 1950s.

During the 1950s and 1960s, the Science Service associated the term with fear of gamma radiation and the medical use of x-rays. A Science Service article published in several American newspapers proposed that "radiophobia" could be attributed to the publication of information regarding the "genetic hazards" of exposure to ionising radiation by the National Academy of Sciences in 1956.

In a newspaper column published in 1970, Dr Harold Pettit MD wrote:

"A healthy respect for the hazards of radiation is desirable. When atomic testing began in the early 1950s, these hazards were grossly exaggerated, producing a new psychological disorder which has been called 'radiophobia' or 'nuclear neurosis'."

Castle Bravo and its influence on public perception

On March 1, 1954, the Castle Bravo test of a first-of-its-kind experimental thermonuclear "Shrimp" device overshot its predicted TNT-equivalent yield of 4–6 Mt and instead produced 15 Mt. The test produced an unanticipated amount of "Bikini snow", visible particles of nuclear fallout, which caught in its plume the Japanese fishing boat Daigo Fukuryū Maru (Lucky Dragon), sailing outside the danger zone that had been cordoned off based on the predicted ~5 Mt yield. Approximately two weeks after the test and fallout exposure, the 23-member fishing crew began to fall ill with acute radiation sickness, largely brought on by beta burns caused by the direct contact of their bare hands with the Bikini snow as they scooped it into bags. Kuboyama Aikichi, the boat's chief radioman, died 7 months later, on September 23, 1954. It was later estimated that about a hundred fishing boats were contaminated to some degree by fallout from the test. Inhabitants of the Marshall Islands were also exposed to fallout, and a number of islands had to be evacuated.

This incident, due to the era of secrecy around nuclear weapons, created widespread fear of uncontrolled and unpredictable nuclear weapons, and also of radioactively contaminated fish affecting the Japanese food supply. With the publication of Joseph Rotblat's findings that the contamination caused by the fallout from the Castle Bravo test was nearly a thousand times greater than that stated officially, outcry in Japan reached such a level that the incident was dubbed by some as "a second Hiroshima". To prevent the subsequent strong anti-nuclear movement from turning into an anti-American movement, the Japanese and U.S. governments agreed on compensation of 2 million dollars for the contaminated fishery, with the surviving 22 crew men receiving about ¥2 million each ($5,556 in 1954, $65,000 in 2025).

The surviving crew members, and their families, would later experience prejudice and discrimination, as local people thought that radiation was contagious.

The Castle Bravo test and the new fears of radioactive fallout inspired a new direction in art and cinema. The Godzilla films, beginning with Ishirō Honda's landmark 1954 film Gojira, are strong metaphors for post-war radiophobia. The opening scene of Gojira echoes the story of the Daigo Fukuryū Maru, from the initial distant flash of light to survivors being found with radiation burns. Although he found the special effects unconvincing, Roger Ebert stated that the film was "an important one" and "properly decoded, was the Fahrenheit 9/11 of its time."

A year after the Castle Bravo test, Akira Kurosawa examined one person's unreasoning terror of radiation and nuclear war in his 1955 film I Live in Fear. At the end of the film, the foundry worker who lives in fear has been declared incompetent by his family, but the possible partial validity of his fears has transferred over to his doctor.

Nevil Shute's 1957 novel On the Beach depicts a future just six years later, based on the premise that a nuclear war has released so much radioactive fallout that all life in the Northern Hemisphere has been killed. The novel is set in Australia, which, along with the rest of the Southern Hemisphere, awaits a similar and inevitable fate. Helen Caldicott describes reading the novel in adolescence as 'a formative event' in her becoming part of the anti-nuclear movement.

Radiophobia and Chernobyl

In the former Soviet Union, many patients with negligible radioactive exposure after the Chernobyl disaster displayed extreme anxiety about low level radiation exposure; they developed many psychosomatic problems, with an increase in fatalistic alcoholism also being observed. As Japanese health and radiation specialist Shunichi Yamashita noted:

We know from Chernobyl that the psychological consequences are enormous. Life expectancy of the evacuees dropped from 65 to 58 years—not [predominately] because of cancer, but because of depression, alcoholism and suicide. Relocation is not easy, the stress is very big. We must not only track those problems, but also treat them. Otherwise people will feel they are just guinea pigs in our research.

The term "radiation phobia syndrome" was introduced in 1987 by L. A. Ilyin and O. A. Pavlovsky in their report "Radiological consequences of the Chernobyl accident in the Soviet Union and measures taken to mitigate their impact".

The author of Chernobyl Poems Lyubov Sirota wrote in her poem "Radiophobia":

Is this only—a fear of radiation?

Perhaps rather—a fear of wars?
Perhaps—the dread of betrayal,

Cowardice, stupidity, lawlessness?

The term has been criticized by Adolph Kharash, Science Director at the Moscow State University:

It treats the normal impulse to self-protection, natural to everything living, your moral suffering, your anguish and your concern about the fate of your children, relatives and friends, and your own physical suffering and sickness as a result of delirium, of pathological perversion.

However, the psychological phobia of radiation in sufferers may not coincide with an actual life-threatening exposure to an individual or their children. Radiophobia refers only to a display of anxiety disproportionate to the actual quantity of radiation one is exposed to, with, in many cases, radiation exposure values equal to, or not much higher than, what individuals are naturally exposed to every day from background radiation. Anxiety following a response to an actual life-threatening level of exposure to radiation is not considered to be radiophobia, nor misplaced anxiety, but a normal, appropriate response.

Marvin Goldman is an American doctor who argued in newspaper commentary that radiophobia had taken a larger toll than the fallout itself.

Chernobyl abortions

Following the accident, journalists mistrusted many medical professionals (such as the spokesman from the UK National Radiological Protection Board), and in turn encouraged the public to mistrust them.

Throughout the European continent, in nations where abortion is legal, many requests for induced abortions of otherwise normal pregnancies were made out of fear of radiation from Chernobyl, including an excess number of abortions of healthy human fetuses in Denmark in the months following the accident.

As the increase in radiation in Denmark was so low that almost no increased risk of birth defects was expected, the public debate and anxiety among the pregnant women and their husbands "caused" more fetal deaths in Denmark than the accident. This underlines the importance of public debate, the role of the mass media and of the way in which National Health authorities participate in this debate.

In Greece, following the accident there were panic and false rumors, which led many obstetricians initially to think it prudent to interrupt otherwise wanted pregnancies, or left them unable to resist requests from worried pregnant mothers fearing radiation; within a few weeks, misconceptions within the medical profession were largely cleared up, although worries persisted in the general population. Although it was determined that the effective dose would not exceed 1 mSv (0.1 rem), a dose much lower than any that could induce embryonic abnormalities or other non-stochastic effects, an estimated 2,500 excess terminations of otherwise wanted pregnancies were observed, probably out of the mothers' fear of some perceived radiation risk.

A number of induced abortions by request "slightly" above the expected figure occurred in Italy, where an initial "week of reflection" followed by a two- to three-week health-system delay usually precedes the procedure.

Radiophobia and health effects

"My former colleague, William Clark, has likened the public’s frenzy over small environmental insults to the fear of witches in the later Middle Ages. Some million certified “witches” were executed because they could not prove that they had not caused harm to someone or something. In the same way, since one cannot prove that tiny amounts of radiation did not cause a particular leukemia—for that matter one cannot prove that they caused it either—those who wish to succumb to low-level phobia succumb. As a result nuclear energy […is] under siege. Not until the low–level controversy is resolved can we expect nuclear energy to be fully accepted."
Alvin M. Weinberg

The term "radiophobia" is also sometimes used in arguments against proponents of the conservative LNT model (the linear no-threshold model of ionizing radiation risk) of radiation safety, proposed by the U.S. National Council on Radiation Protection and Measurements (NCRP) in 1949. The "no-threshold" position effectively assumes, from data extrapolated from the atomic bombings of Hiroshima and Nagasaki, that even negligible doses of radiation increase one's risk of cancer linearly as exposure increases from zero to high dose rates. The LNT model therefore suggests that even radiation exposure from naturally occurring background radiation may be harmful. There is no biological evidence, and only weak statistical evidence, that doses below 100 mSv have any harmful effect.

After the Fukushima disaster, the German news magazine Der Spiegel reported that Japanese residents were suffering from radiophobia. British medical scientist Geraldine Thomas has also attributed suffering of the Japanese to radiophobia in interviews and formal presentations. Four years after the event The New York Times reported that "about 1,600 people died from the stress of the evacuation". The forced evacuation of 154,000 people "was not justified by the relatively moderate radiation levels", but was ordered because "the government basically panicked".

Even as part of the public fears radiation, some commercial products are promoted on the basis of their radioactive content, such as "negative ion" bracelets or radon spas.

Radiophobia and industrial and healthcare use

Radiation, most commonly in the form of X-rays, is used frequently in society in order to produce positive outcomes. The primary uses of radiation in healthcare are in radiographic examination and procedures, and radiotherapy in the treatment of cancerous conditions. Radiophobia can be a fear which patients experience before and after either of these procedures; it is therefore the responsibility of the healthcare professional at the time, often a radiographer or radiation therapist, to reassure the patients about the stochastic and deterministic effects of radiation on human physiology. Advising patients and other irradiated persons of the various radiation protection measures that are enforced, including the use of lead-rubber aprons, dosimetry and automatic exposure control is a common method of informing and reassuring radiophobia sufferers.

Similarly, in industrial radiography, persons may experience radiophobia when they are near industrial radiographic equipment.

Preregistration (science)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Preregistration_(science)
...