
Monday, August 21, 2023

Health effects of wine


From Wikipedia, the free encyclopedia
A glass of red wine
Wine has a long history of use in the world of medicine and health.

The health effects of wine are mainly determined by its active ingredient – alcohol. Preliminary studies found that drinking small quantities of wine (up to one standard drink per day for women and one to two drinks per day for men), particularly of red wine, may be associated with a decreased risk of cardiovascular diseases, cognitive decline, stroke, diabetes mellitus, metabolic syndrome, and early death. Other studies found no such effects.

Drinking more than the standard drink amount increases the risk of cardiovascular diseases, high blood pressure, atrial fibrillation, stroke, and cancer. Mixed results have also been observed for light drinking and cancer mortality.

Risk is greater in young people due to binge drinking, which may result in violence or accidents. About 88,000 deaths in the United States are estimated to be due to alcohol each year. Alcoholism reduces a person's life expectancy by around ten years, and excessive alcohol use is the third leading cause of early death in the United States. According to systematic reviews and medical associations, people who do not drink should not start drinking wine or any other alcoholic beverage.

The history of wine includes use as an early form of medication, being recommended variously as a safe alternative to drinking water, an antiseptic for treating wounds, a digestive aid, and as a cure for a wide range of ailments including lethargy, diarrhea, and pain from child birth. Ancient Egyptian papyri and Sumerian tablets dating back to 2200 BC detail the medicinal role of wine, making it the world's oldest documented human-made medicine. Wine continued to play a major role in medicine until the late 19th and early 20th century, when changing opinions and medical research on alcohol and alcoholism cast doubt on its role as part of a healthy lifestyle.

Moderate consumption

Some doctors define "moderate" consumption as one 5 oz (150 ml) glass of wine per day for women and two glasses per day for men.

Nearly all research into the positive medical benefits of wine consumption makes a distinction between moderate consumption and heavy or binge drinking. Moderate levels of consumption vary by the individual according to age, sex, genetics, weight and body stature, as well as situational conditions, such as food consumption or use of drugs. In general, women absorb alcohol more quickly than men due to their lower body water content, so their moderate levels of consumption may be lower than those for a male of equal age. Some experts define "moderate consumption" as less than one 5-US-fluid-ounce (150 ml) glass of wine per day for women and two glasses per day for men.

The view of consuming wine in moderation has a history recorded as early as the Greek poet Eubulus (360 BC) who believed that three bowls (kylix) were the ideal amount of wine to consume. The number of three bowls for moderation is a common theme throughout Greek writing; today the standard 750 ml wine bottle contains roughly the volume of three kylix cups (250 ml or 8 fl oz each). However, the kylix cups would have contained a diluted wine, at a 1:2 or 1:3 dilution with water. In his circa 375 BC play Semele or Dionysus, Eubulus has Dionysus say:

Three bowls do I mix for the temperate: one to health, which they empty first, the second to love and pleasure, the third to sleep. When this bowl is drunk up, wise guests go home. The fourth bowl is ours no longer, but belongs to violence; the fifth to uproar, the sixth to drunken revel, the seventh to black eyes, the eighth is the policeman's, the ninth belongs to biliousness, and the tenth to madness and hurling the furniture.

Emerging evidence suggests that "even drinking within the recommended limits may increase the overall risk of death from various causes". A 2018 systematic analysis found that "The level of alcohol consumption that minimised harm across health outcomes was zero (95% UI 0·0–0·8) standard drinks per week". On the other hand, a 2020 USDA systematic review found that "low average consumption was associated with lower risk of mortality compared with never drinking status". As of 2022, "moderate" consumption is usually defined as average consumption per day, while patterns of consumption vary and may have implications for risks and effects on health (such as habituation from daily consumption, or nonlinear dosage-harm associations from intermittent excessive alcohol use). According to the CDC, it is important to focus on the amount people drink on the days that they drink.

Effect on the body

Bones

Heavy alcohol consumption has been shown to have a damaging effect on the cellular processes that create bone tissue, and long-term alcoholic consumption at high levels increases the frequency of fractures. A 2012 study found no relation between wine consumption and bone mineral density.

Cancer

The International Agency for Research on Cancer of the World Health Organization has classified alcohol as a Group 1 carcinogen.

Cardiovascular system

The anticoagulant properties of the alcohol in wine may reduce the risk of the blood clots associated with several cardiovascular diseases.

Professional cardiology associations recommend that people who are currently nondrinkers should abstain from drinking alcohol. Heavy drinkers have increased risk for heart disease, cardiac arrhythmias, hypertension, and elevated cholesterol levels.

The alcohol in wine has anticoagulant properties that may limit blood clotting.

Digestive system

The risk of infection from the bacterium Helicobacter pylori, which is associated with gastritis and peptic ulcers, appears to be lower with moderate alcohol consumption.

Headaches

There are several potential causes of so-called "red wine headaches", including histamine and tannins from grape skin or other phenolic compounds in wine. Sulfites – which are used as a preservative in wine – are unlikely to be a headache factor. Wine, like other alcoholic beverages, is a diuretic that promotes dehydration, which can lead to headaches (as often experienced with hangovers), indicating a need to maintain hydration when drinking wine and to consume it in moderation. A 2017 review found that 22% of people experiencing migraine or tension headaches identified alcohol as a precipitating factor, with red wine about three times more likely than beer to trigger a headache.

Food intake

Wine has a long history of being paired with food and, when consumed with a meal, may help reduce food intake by slowing the stomach's emptying.

Alcohol can stimulate the appetite, so it is better to drink it with food. When alcohol is mixed with food, it can slow the stomach's emptying time and potentially decrease the amount of food consumed at the meal.

A 150-millilitre (5-US-fluid-ounce) serving of red or white wine provides about 500 to 540 kilojoules (120 to 130 kilocalories) of food energy, while dessert wines provide more. Most wines have an alcohol by volume (ABV) percentage of about 11%; the higher the ABV, the higher the energy content of a wine.
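These figures can be sanity-checked from the ABV alone. The sketch below estimates food energy from serving volume and alcohol content; the ethanol density, the 7 kcal per gram of ethanol, and the flat residual-sugar allowance are assumptions for illustration, not an official nutrition calculation.

```python
# Rough estimate of the food energy in a serving of wine.
# Assumptions (not official nutrition data): ethanol density
# 0.789 g/ml, 7 kcal per gram of ethanol, and a flat residual-sugar
# allowance at 4 kcal per gram of carbohydrate.
ETHANOL_DENSITY_G_PER_ML = 0.789
KCAL_PER_G_ETHANOL = 7.0
KCAL_PER_G_SUGAR = 4.0

def wine_kcal(volume_ml: float, abv: float, sugar_g: float = 5.0) -> float:
    """Approximate kilocalories in a serving; abv is a fraction (0.11 = 11%)."""
    alcohol_g = volume_ml * abv * ETHANOL_DENSITY_G_PER_ML
    return alcohol_g * KCAL_PER_G_ETHANOL + sugar_g * KCAL_PER_G_SUGAR

# A 150 ml glass at 11% ABV comes out around 110 kcal -- the same
# ballpark as the 120-130 kcal quoted above.
print(round(wine_kcal(150, 0.11)))
```

Because nearly all of the energy comes from the ethanol term, the estimate rises roughly in proportion to ABV, consistent with the statement above.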

Psychological and social

Danish epidemiological studies suggest that a number of psychological health benefits are associated with drinking wine. In a study testing this idea, Mortensen et al. (2001) measured socioeconomic status, education, IQ, personality, psychiatric symptoms, and health-related behaviors, including alcohol consumption. The analysis compared beer drinkers, wine drinkers, and non-drinkers. For both men and women, drinking wine was associated with higher parental socioeconomic status, higher parental education, and higher socioeconomic status of the subjects themselves. On IQ tests, wine drinkers consistently scored higher than beer drinkers, with an average difference of 18 points. With regard to psychological functioning, personality, and other health-related behaviors, the study found wine drinkers to operate at optimal levels while beer drinkers performed below optimal levels. As these social and psychological factors also correlate with health outcomes, they offer a plausible explanation for at least some of the apparent health benefits of wine.

These results should nonetheless be interpreted cautiously. The study, as far as is known, did not account for the genetic, prenatal, and environmental influences that shape general intelligence, and it remains a matter of scientific debate which indicators of intelligence are valid and reliable. Treating wine consumption as a marker of higher intelligence, or beer consumption as a marker of lower intelligence, is therefore not warranted by this single study; further research is needed on whether regular wine drinkers genuinely score higher on IQ tests than beer drinkers, and on what psychological differences distinguish the two groups.

Heavy metals

In 2008, researchers from Kingston University in London discovered red wine to contain high levels of toxic metals relative to other beverages in the sample. Although the metal ions, which included chromium, copper, iron, manganese, nickel, vanadium and zinc, were also present in other plant-based beverages, the sample wine tested significantly higher for all metal ions, especially vanadium. Risk assessment was calculated using "target hazard quotients" (THQ), a method of quantifying health concerns associated with lifetime exposure to chemical pollutants. Developed by the Environmental Protection Agency in the US and used mainly to examine seafood, a THQ of less than 1 represents no concern while, for example, mercury levels in fish calculated to have THQs of between 1 and 5 would represent cause for concern.

The researchers stressed that a single glass of wine would not lead to metal poisoning, pointing out that their THQ calculations were based on the average person drinking one-third of a bottle of wine (250 ml) every day between the ages of 18 and 80. However, the "combined THQ values" for metal ions in the red wine they analyzed were reported to be as high as 125. A subsequent study by the same university, using a meta-analysis of data based on wine samples from a selection of mostly European countries, found equally high levels of vanadium in many red wines, showing combined THQ values in the range of 50 to 200, with some as high as 350.
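The arithmetic behind such figures can be sketched with the standard EPA-style THQ formula. The reference dose, body weight, and metal concentration below are illustrative assumptions, not the study's actual inputs.

```python
# Hedged sketch of an EPA-style target hazard quotient (THQ):
# THQ = (EF * ED * IR * C) / (RfD * BW * AT).
# The reference dose, body weight, and metal concentration below are
# illustrative assumptions, not values from the Kingston study.
def thq(conc_mg_per_l: float, intake_l_per_day: float,
        rfd_mg_per_kg_day: float, body_weight_kg: float = 70.0,
        exposure_years: float = 62.0) -> float:
    ef_days_per_year = 365.0          # exposure frequency
    at_days = 365.0 * exposure_years  # averaging time
    return (ef_days_per_year * exposure_years * intake_l_per_day
            * conc_mg_per_l) / (rfd_mg_per_kg_day * body_weight_kg * at_days)

# 250 ml/day for ages 18-80 (62 years), a hypothetical 1.0 mg/L metal
# concentration, and an assumed oral reference dose of 5e-3 mg/kg/day:
print(thq(1.0, 0.25, 5e-3))  # below 1, i.e. no concern at these inputs
```

With daily exposure (EF = 365 days/year) and an averaging time of 365 × ED days, the time terms cancel, so for lifelong daily drinking the quotient reduces to IR·C/(RfD·BW); the "combined" values in the study come from summing per-metal quotients.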

The findings sparked immediate controversy due to several issues: the study's reliance on secondary data; the assumption that all wines contributing to that data were representative of the countries stated; and the grouping together of poorly understood high-concentration ions, such as vanadium, with relatively low-level, common ions such as copper and manganese. Some publications pointed out that the lack of identifiable wines and grape varieties, specific producers or even wine regions, provided only misleading generalizations that should not be relied upon in choosing wines.

In a news bulletin following the widespread reporting of the findings, the UK's National Health Service (NHS) was also concerned that "the way the researchers added together hazards from different metals to produce a final score for individual wines may not be particularly meaningful". Commentators in the US questioned the relevance of seafood-based THQ assessments to agricultural produce, noting that the TTB, which is responsible for testing imports for metal ion contamination, had not detected an increased risk. George Solas, quality assessor for the Canadian Liquor Control Board of Ontario (LCBO), claimed that the levels of heavy metal contamination reported were within the levels permitted for drinking water in tested reservoirs.

While the NHS also described calls for improved wine labeling as an "extreme response" to research that provided "few solid answers", it acknowledged the authors' call for further research into wine production, including the influence that grape variety, soil type, geographical region, insecticides, containment vessels, and seasonal variations may have on metal ion uptake.

Chemical composition

Natural phenols and polyphenols

Although red wine contains many chemicals under basic research for their potential health benefits, resveratrol has been particularly well studied. Regulatory authorities such as the European Food Safety Authority and the US Food and Drug Administration have identified resveratrol and other such phenolic compounds as insufficiently understood to confirm any role as physiological antioxidants.

Resveratrol

Red wine contains an average of 1.9 (±1.7) mg of trans-resveratrol per liter. For comparison, dietary supplements of resveratrol (trans-resveratrol content varies) may contain as much as 500 mg.

Resveratrol is a stilbenoid phenolic compound found in wine produced in the grape skins and leaves of grape vines.

The production and concentration of resveratrol is not equal among all the varieties of wine grapes. Differences in clones, rootstock, Vitis species as well as climate conditions can affect the production of resveratrol. Also, because resveratrol is part of the defence mechanism in grapevines against attack by fungi or grape disease, the degree of exposure to fungal infection and grape diseases also appear to play a role. The Muscadinia family of vines, which has adapted over time through exposure to North American grape diseases such as phylloxera, has some of the highest concentrations of resveratrol among wine grapes. Among the European Vitis vinifera, grapes derived from the Burgundian Pinot family tend to have substantially higher amounts of resveratrol than grapes derived from the Cabernet family of Bordeaux. Wine regions with cooler, wetter climates that are more prone to grape disease and fungal attacks such as Oregon and New York tend to produce grapes with higher concentrations of resveratrol than warmer, dry climates like California and Australia.

Although red and white wine grape varieties produce similar amounts of resveratrol, red wine contains more than white because red wines are produced by maceration (soaking the grape skins in the must). Other winemaking techniques, such as the use of certain strains of yeast during fermentation or of lactic acid bacteria during malolactic fermentation, can influence the amount of resveratrol left in the resulting wines. Similarly, the use of certain fining agents during the clarification and stabilization of wine can strip it of some resveratrol.

Anthocyanins

Red grapes are high in anthocyanins, the pigments responsible for the color of many fruits, including red grapes. The darker the red wine, the more anthocyanins it contains.

Typical concentrations of free anthocyanins in full-bodied young red wines are around 500 mg per liter. For comparison, 100 g of fresh bilberries contain 300–700 mg, and 100 g of fresh elderberries contain around 603–1265 mg.

Following dietary ingestion, anthocyanins undergo rapid and extensive metabolism that makes the biological effects presumed from in vitro studies unlikely to apply in vivo.

Although anthocyanins are under basic and early-stage clinical research for a variety of disease conditions, there is insufficient evidence that they have any beneficial effect in the human body. The US FDA has issued warning letters emphasizing that anthocyanins are not a defined nutrient, cannot be assigned a dietary content level, and are not regulated as a drug to treat any human disease.

History of wine in medicine

Early medicine was intimately tied with religion and the supernatural, with early practitioners often being priests and magicians. Wine's close association with ritual made it a logical tool for these early medical practices. Tablets from Sumeria and papyri from Egypt dating to 2200 BC include recipes for wine based medicines, making wine the oldest documented human-made medicine.

Early history

Hippocrates, the father of modern medicine, prescribed wine for a variety of ailments including lethargy and diarrhea.
De medicina

When the Greeks introduced a more systematized approach to medicine, wine retained its prominent role. The Greek physician Hippocrates considered wine a part of a healthy diet, and advocated its use as a disinfectant for wounds, as well as a medium in which to mix other drugs for consumption by the patient. He also prescribed wine as a cure for various ailments ranging from diarrhea and lethargy to pain during childbirth.

The medical practices of the Romans involved the use of wine in a similar manner. In his 1st-century work De Medicina, the Roman encyclopedist Aulus Cornelius Celsus detailed a long list of Greek and Roman wines used for medicinal purposes. While treating gladiators in Asia Minor, the Roman physician Galen would use wine as a disinfectant for all types of wounds, and even soaked exposed bowels before returning them to the body. During his four years with the gladiators, only five deaths occurred, compared to sixty deaths under the watch of the physician before him.

Religion still played a significant role in promoting wine's use for health. The Jewish Talmud noted wine to be "the foremost of all medicines: wherever wine is lacking, medicines become necessary." In his first epistle to Timothy, Paul the Apostle recommended that his young colleague drink a little wine now and then for the benefit of his stomach and digestion. While the Islamic Koran contained restrictions on all alcohol, Islamic doctors such as the Persian Avicenna in the 11th century AD noted that wine was an efficient digestive aid; because of religious law, however, its use was limited to dressing wounds as a disinfectant. Catholic monasteries during the Middle Ages also regularly used wine for medical treatments. So closely tied were wine and medicine that the first printed book on wine was written in the 14th century by a physician, Arnaldus de Villa Nova, with lengthy essays on wine's suitability for treating a variety of ailments such as dementia and sinus problems.

Risks of consumption

The lack of safe drinking water may have been one reason for wine's popularity in medicine; wine was still being used to sterilize water as late as the Hamburg cholera epidemic of 1892, in order to control the spread of the disease. However, the late 19th and early 20th centuries ushered in a period of changing views on the role of alcohol, and by extension wine, in health and society. The Temperance movement gained steam by touting the ills of alcoholism, which the medical establishment eventually defined as a disease. Studies of the long- and short-term effects of alcohol consumption caused many in the medical community to reconsider the role of wine in medicine and diet. Soon, public opinion turned against consumption of alcohol in any form, leading to Prohibition in the United States and other countries. In some areas, wine retained a limited role, such as an exemption from Prohibition in the United States for "therapeutic wines" sold legally in drug stores. These wines were marketed for their supposed medicinal benefits, but some wineries used the measure as a loophole to sell large quantities of wine for recreational consumption. In response, the United States government mandated that producers include an emetic additive that would induce vomiting if more than a certain dosage was consumed.

The French paradox suggests that red wine consumption may offset the heart-disease risk of a diet high in fatty dairy products, such as cheeses.

Throughout the early-to-mid 20th century, health advocates pointed to the risks of alcohol consumption and the role it played in a variety of ailments such as blood disorders, high blood pressure, cancer, infertility, liver damage, muscle atrophy, psoriasis, skin infections, strokes, and long-term brain damage. Studies showed a connection between alcohol consumption by pregnant mothers and an increased risk of intellectual disability and physical abnormalities in what became known as fetal alcohol syndrome, prompting the use of alcohol packaging warning messages in several countries.

French paradox

The hypothesis of the French paradox assumes a low prevalence of heart disease due to the consumption of red wine despite a diet high in saturated fat. Although epidemiological studies indicate red wine consumption may support the French paradox, there is insufficient clinical evidence to confirm it, as of 2017.

Electronic skin

From Wikipedia, the free encyclopedia

Electronic skin refers to flexible, stretchable and self-healing electronics that are able to mimic functionalities of human or animal skin. The broad class of materials often contain sensing abilities that are intended to reproduce the capabilities of human skin to respond to environmental factors such as changes in heat and pressure.

Advances in electronic skin research focus on designing materials that are stretchy, robust, and flexible. Research in the individual fields of flexible electronics and tactile sensing has progressed greatly; however, electronic skin design attempts to bring together advances in many areas of materials research without sacrificing individual benefits from each field. The successful combination of flexible and stretchable mechanical properties with sensors and the ability to self-heal would open the door to many possible applications, including soft robotics, prosthetics, artificial intelligence, and health monitoring.

Recent advances in the field of electronic skin have focused on incorporating green materials ideals and environmental awareness into the design process. As one of the main challenges facing electronic skin development is the ability of the material to withstand mechanical strain and maintain sensing ability or electronic properties, recyclability and self-healing properties are especially critical in the future design of new electronic skins.

Rehealable electronic skin

Self-healing abilities of electronic skin are critical to potential applications of electronic skin in fields such as soft robotics. Proper design of self-healing electronic skin requires not only healing of the base substrate but also the reestablishment of any sensing functions such as tactile sensing or electrical conductivity. Ideally, the self-healing process of electronic skin does not rely upon outside stimulation such as increased temperature, pressure, or solvation. Self-healing, or rehealable, electronic skin is often achieved through a polymer-based material or a hybrid material.

Polymer-based materials

In 2018, Zou et al. published work on electronic skin that is able to reform covalent bonds when damaged. The group studied a polyimine-based crosslinked network, synthesized as shown in Figure 1. The e-skin is considered rehealable because of "reversible bond exchange," meaning that the bonds holding the network together are able to break and reform under certain conditions such as solvation and heating. The rehealable and reusable aspect of such a thermoset material is unique because many thermoset materials irreversibly form crosslinked networks through covalent bonds. Bonds formed during the healing process are indistinguishable from those of the original polymer network.

Figure 1. Polymerization scheme for formation of polyimine-based self-healing electronic skin.

Dynamic non-covalent crosslinking has also been shown to form a rehealable polymer network. In 2016, Oh et al. looked specifically at semiconducting polymers for organic transistors. They found that incorporating 2,6-pyridine dicarboxamide (PDCA) into the polymer backbone could impart self-healing abilities based on the network of hydrogen bonds formed between PDCA groups. With PDCA in the polymer backbone, the material was able to withstand up to 100% strain without showing signs of microscale cracking. In this example, the hydrogen bonds dissipate energy as the strain increases.

Hybrid materials

Polymer networks are able to facilitate dynamic healing processes through hydrogen bonds or dynamic covalent chemistry. However, the incorporation of inorganic particles can greatly expand the functionality of polymer-based materials for electronic skin applications. The incorporation of micro-structured nickel particles into a polymer network (Figure 2) has been shown to maintain self-healing properties based on the reformation of hydrogen bonding networks around the inorganic particles. The material is able to regain its conductivity within 15 seconds of breakage, and the mechanical properties are regained after 10 minutes at room temperature without added stimulus. This material relies on hydrogen bonds formed between urea groups when they align. The hydrogen atoms of urea functional groups are ideally situated to form a hydrogen-bonding network because they are near an electron-withdrawing carbonyl group. This polymer network with embedded nickel particles demonstrates the possibility of using polymers as supramolecular hosts to develop self-healing conductive composites.

Figure 2. Self-healing material based on hydrogen bonding and interactions with micro-structured nickel particles.

Flexible and porous graphene foams that are interconnected in a 3D manner have also been shown to have self-healing properties. Thin films of poly(N,N-dimethylacrylamide)-poly(vinyl alcohol) (PDMAA) and reduced graphene oxide have shown high electrical conductivity and self-healing properties. The healing abilities of the hybrid composite are suspected to be due to hydrogen bonds between the PDMAA chains, and the healing process is able to restore the initial length and recover the conductive properties.

Recyclable electronic skin

Zou et al. present an interesting advance in the field of electronic skin, applicable to robotics, prosthetics, and many other areas, in the form of a fully recyclable e-skin material. The e-skin developed by the group consists of a covalently bound polymer network that is thermoset, i.e., cured into a permanently crosslinked form at a specific temperature. Because the polymer network is thermoset, it is chemically and thermally stable; nevertheless, the material is also recyclable and reusable. At room temperature, the polyimine material, with or without silver nanoparticles, can be dissolved on the timescale of a few hours. The recycling process allows devices that are damaged beyond their self-healing capabilities to be dissolved and formed into new devices (Figure 3). This advance opens the door to lower-cost production and greener approaches to e-skin development.

Figure 3. Recycling process for conductive polyimine-based e-skin.

Flexible and stretchy electronic skin

The ability of electronic skin to withstand mechanical deformation, including stretching and flexing, without losing functionality is crucial for applications in prosthetics, artificial intelligence, soft robotics, health monitoring, biocompatible interfaces, and communication devices. Flexible electronics are often designed by depositing electronic materials on flexible polymer substrates, thereby relying on an organic substrate to impart favorable mechanical properties. Stretchable e-skin materials have been approached from two directions. Hybrid materials can rely on an organic network for stretchiness while embedding inorganic particles or sensors, which are not inherently stretchable. Other research has focused on developing stretchable materials that themselves have favorable electronic or sensing capabilities.

Zou et al. studied the inclusion of linkers that are described as "serpentine" in their polyimine matrix. These linkers make the e-skin sensors able to flex with movement and distortion. The incorporation of alkyl spacers in polymer-based materials has also been shown to increase flexibility without decreasing charge transfer mobility. Oh et al. developed a stretchable and flexible material based on 3,6-di(thiophen-2-yl)-2,5-dihydropyrrolo[3,4-c]pyrrole-1,4-dione (DPP) and non-conjugated 2,6-pyridine dicarboxamide (PDCA) as a source of hydrogen bonds (Figure 4).

Figure 4. A stretchable and self-healing semiconducting polymer-based material.

Graphene has also been shown to be a suitable material for electronic skin applications due to its stiffness and tensile strength. Graphene is an appealing material because its synthesis on flexible substrates is scalable and cost-efficient.

Mechanical properties of skin

Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140–600 kPa and a thickness of 0.05–1.5 mm; the dermis has a modulus of 2–80 kPa and a thickness of 0.3–3 mm. This bilayer exhibits a linear elastic response for strains below 15% and a nonlinear response at larger strains. To achieve conformability, skin-mounted stretchable electronics should ideally match the mechanical properties of the epidermal layer.

Tuning mechanical properties

Conventional high-performance electronic devices are made of inorganic materials such as silicon, which is rigid and brittle and exhibits poor biocompatibility; the mechanical mismatch between skin and device makes skin-integrated electronics difficult. To address this challenge, researchers have constructed flexible electronics in the form of ultrathin layers. According to Euler-Bernoulli beam theory, the resistance of an object to bending (its flexural rigidity) is proportional to the cube of its thickness. This implies that thinner objects can bend and stretch more easily. As a result, even though the material has a relatively high Young's modulus, devices manufactured on ultrathin substrates exhibit a decrease in bending stiffness and can bend to a small radius of curvature without fracturing. Thin devices have been developed as a result of significant advancements in nanotechnology, fabrication, and manufacturing. This approach was used to create devices composed of 100-200 nm thick Si nanomembranes deposited on thin flexible polymeric substrates.
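The cubic scaling is easy to quantify. The sketch below compares the flexural rigidity, D = E·t³ / 12(1 − ν²), of a standard-thickness silicon wafer against a 200 nm nanomembrane, using typical literature values for silicon's modulus and Poisson ratio (illustrative values, not from a specific study).

```python
# Flexural rigidity of a thin plate: D = E * t**3 / (12 * (1 - nu**2)).
# E and nu below are typical literature values for silicon, used here
# only for illustration.
def flexural_rigidity(youngs_modulus_pa: float, thickness_m: float,
                      poisson_ratio: float) -> float:
    return youngs_modulus_pa * thickness_m ** 3 / (12 * (1 - poisson_ratio ** 2))

E_SI, NU_SI = 130e9, 0.27
wafer = flexural_rigidity(E_SI, 500e-6, NU_SI)     # 500 um bulk wafer
membrane = flexural_rigidity(E_SI, 200e-9, NU_SI)  # 200 nm nanomembrane

# Thinning by 2500x cuts bending stiffness by 2500**3, about 1.6e10.
print(f"stiffness ratio: {wafer / membrane:.2e}")
```

The modulus E cancels in the ratio, which is why even a stiff material like silicon becomes compliant when thinned to the nanometre scale.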

Furthermore, structural design can be used to tune the mechanical stability of the devices. Engineering the surface structure allows stiff electronics to be effectively softened. Buckling, island-interconnect designs, and the kirigami concept have all been employed successfully to make entire systems stretchable.

Mechanical buckling can be used to create wavy structures on elastomeric thin substrates, improving the device's stretchability. The buckling approach was used to create Si nanoribbons from single-crystal Si on an elastomeric substrate; the study demonstrated that the device could bear a maximum strain of 10% in both compression and tension.

In the island-interconnect approach, rigid islands are connected by flexible bridges of various geometries, such as zig-zag or serpentine structures, which reduce the effective stiffness, tune the stretchability of the system, and deform elastically under strains applied in specific directions. Serpentine-shaped interconnects have been shown to have no significant effect on the electrical characteristics of epidermal electronics. It has also been shown that entanglement of the interconnects, which opposes movement of the device above the substrate, causes spiral interconnects to stretch and deform significantly more than serpentine structures. CMOS inverters constructed on a PDMS substrate using 3D island-interconnect technologies withstood stretching to 140% strain.

Kirigami is built around the concept of folding and cutting 2D membranes, which increases the tensile strength of the substrate as well as its out-of-plane deformation and stretchability. These 2D structures can subsequently be transformed by buckling into 3D structures with controllable topography, shape, and size, yielding interesting properties and applications.

Conductive electronic skin

The development of conductive electronic skin is of interest for many electrical applications. Research into conductive electronic skin has taken two routes: conductive self-healing polymers or embedding conductive inorganic materials in non-conductive polymer networks.

The self-healing conductive composite synthesized by Tee et al. (Figure 2) incorporated micro-structured nickel particles into a polymer host. The nickel particles adhere to the network through favorable interactions between the native oxide layer on the particles' surface and the hydrogen-bonding polymer.

Nanoparticles have also been studied for their ability to impart conductivity to electronic skin materials. Zou et al. embedded silver nanoparticles (AgNPs) into a polymer matrix, making the e-skin conductive. The healing process for this material is noteworthy because it restores not only the mechanical properties of the polymer network but also the conductive properties imparted by the embedded silver nanoparticles.

Sensing ability of electronic skin

Some of the challenges that face electronic skin sensing abilities include the fragility of sensors, the recovery time of sensors, repeatability, overcoming mechanical strain, and long-term stability.

Tactile sensors

Applied pressure can be measured by monitoring changes in resistance or capacitance. Coplanar interdigitated electrodes embedded on single-layer graphene have been shown to provide pressure sensitivity for applied pressure as low as 0.11 kPa through measuring changes in capacitance. Piezoresistive sensors have also shown high levels of sensitivity.
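As a rough illustration of capacitive pressure sensing, the sketch below uses a generic parallel-plate model rather than the interdigitated-electrode device described above; the pad area, gap, dielectric permittivity, and elastomer modulus are all assumed nominal values. Pressure compresses the soft dielectric, shrinking the gap and raising the capacitance:

```python
# Toy parallel-plate model of a capacitive tactile sensor (illustrative;
# not a specific device from the text). C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=2.7):  # eps_r ~ PDMS (assumed)
    return EPS0 * eps_r * area_m2 / gap_m

def gap_under_pressure(gap0_m, pressure_pa, youngs_pa=1e6):  # soft elastomer ~1 MPa
    strain = pressure_pa / youngs_pa  # linear-elastic approximation
    return gap0_m * (1 - strain)

c0 = capacitance(1e-6, 10e-6)                            # 1 mm^2 pad, 10 um gap, at rest
c1 = capacitance(1e-6, gap_under_pressure(10e-6, 110))   # light ~0.11 kPa touch
print((c1 - c0) / c0)  # fractional capacitance change read out by the electronics
```

Even this crude model shows why sensitivity hinges on a compliant dielectric: a 0.11 kPa touch on a 1 MPa elastomer changes capacitance by only ~10⁻⁴, so real devices use microstructured films to deform more per unit pressure.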

Ultrathin molybdenum disulfide sensing arrays integrated with graphene have demonstrated promising mechanical properties capable of pressure sensing. Modifications of organic field effect transistors (OFETs) have shown promise in electronic skin applications. Microstructured polydimethylsiloxane thin films can elastically deform when pressure is applied. The deformation of the thin film allows for storage and release of energy.

Visual representation of applied pressure has been one area of interest in the development of tactile sensors. The Bao Group at Stanford University has designed an electrochromically active electronic skin that changes color with different amounts of applied pressure. Applied pressure can also be visualized by incorporating active-matrix organic light-emitting diode displays that emit light when pressure is applied.

Prototype e-skins include a printed synaptic transistor-based electronic skin giving skin-like haptic sensations and touch/pain sensitivity to a robotic hand, and a repairable, hydrogel-based robot skin with a multilayer tactile sensor.

Other sensing applications

Humidity sensors have been incorporated into electronic skin designs using sulfurized tungsten films, whose conductivity changes with humidity. Silicon nanoribbons have also been studied for application as temperature, pressure, and humidity sensors. Scientists at the University of Glasgow have made inroads in developing an e-skin that senses pain in real time, with applications in prosthetics and more lifelike humanoids.

Another prototype pairs an electronic skin with a human-machine interface to enable remotely sensed tactile perception, as well as wearable or robotic sensing of many hazardous substances and pathogens.

Biodefense


Biodefense refers to measures to restore biosecurity to a group of organisms that are, or may be, subject to biological threats or infectious diseases. Biodefense is frequently discussed in the context of biowar or bioterrorism, and is generally considered a military or emergency-response term.

Biodefense applies to two distinct target populations: civilian non-combatants and military combatants (troops in the field). Protection of water and food supplies is often a critical part of biodefense.

Military

Troops in the field

Military biodefense in the United States began with the United States Army Medical Unit (USAMU) at Fort Detrick, Maryland, in 1956. (In contrast to the U.S. Army Biological Warfare Laboratories [1943–1969], also at Fort Detrick, the USAMU's mission was purely to develop defensive measures against bio-agents, as opposed to weapons development.) The USAMU was disestablished in 1969 and succeeded by today's United States Army Medical Research Institute of Infectious Diseases (USAMRIID).

The United States Department of Defense (or "DoD") has focused since at least 1998 on the development and application of vaccine-based biodefenses. In a July 2001 report commissioned by the DoD, the "DoD-critical products" were stated as vaccines against anthrax (AVA and Next Generation), smallpox, plague, tularemia, botulinum, ricin, and equine encephalitis. Note that two of these targets are toxins (botulinum and ricin) while the remainder are infectious agents.

Civilian

Role of public health and disease surveillance

Notably, all of the classical and modern biological weapons organisms are animal diseases, the only exception being smallpox. Thus, in any use of biological weapons, it is highly likely that animals will become ill either simultaneously with, or perhaps earlier than, humans.

Indeed, in the largest known biological weapons accident, the anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union in 1979, sheep became ill with anthrax as far as 200 kilometers from the point where the organism was released from a military facility in the southeastern part of the city (known as Compound 19 and still off-limits to visitors today; see Sverdlovsk anthrax leak).

Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.

For example, in the case of anthrax, it is likely that by 24-36 hours after an attack, some small percentage of individuals (those with compromised immune systems, or who received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs, including a virtually unique chest X-ray finding often recognized by public health officials if they receive timely reports. By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.

Identification of bioweapons

The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and police communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapons attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.

The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.

The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.

The U.S. National Institute of Allergy and Infectious Diseases (NIAID) also participates in the identification and prevention of biowarfare. It first released a biodefense strategy in 2002 and periodically publishes updates as new pathogens become subjects of concern. Within this strategy, responses for specific infectious agents are provided, along with the classification of these agents. NIAID provides countermeasures after the U.S. Department of Homeland Security determines which pathogens pose the greatest threat.

Planning and response

Planning may involve the training of human-resources specialists and the development of biological identification systems. Until recently in the United States, most biological defense strategies were geared toward protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to E. coli or Salmonella, could be of either natural or deliberate origin.

Human Resource Training Programs
To date, several at-risk countries have designed training programs at their universities to prepare specialized personnel to deal with biological threats (for example, the George Mason University Biodefense PhD program in the United States, or the Biodefense Strategic Studies PhD program designed by Dr. Reza Aghanouri in Iran). These programs prepare students and officers to serve as scholars and professionals in the fields of biodefense and biosecurity. They integrate knowledge of natural and man-made biological threats with the skills to develop and analyze policies and strategies for enhancing biosecurity. Other areas of biodefense, including nonproliferation, intelligence and threat assessment, and medical and public health preparedness, are integral parts of these programs.


Preparedness
Biological agents are relatively easy for terrorists to obtain and are becoming more threatening in the U.S., and laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas, and for assessing an area for the hazards associated with a biological attack, are being established in major cities. In addition, forensic technologies are being developed to identify biological agents and determine their geographical origins and/or initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns.

Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is currently lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them.

United States strategy

In September 2018, President Trump and his administration unveiled a new comprehensive plan, the National Biodefense Strategy, for how the government will oversee bioterrorism defense. Currently, 15 federal departments and agencies and 16 branches of the intelligence community work against biological threats, and their work often overlaps. One of the goals of the National Biodefense Strategy is therefore to streamline the efforts of these agencies and prevent overlapping responsibilities.

The U.S. National Security Council will oversee biodefense policy, and the Department of Health and Human Services will be in charge of carrying out the plan. Additionally, each year a special steering committee will review the policy, make updates, and submit budget requests as necessary.

The U.S. government last had a comprehensive defense strategy against bioterror attacks in 2004, when then-President George W. Bush signed Homeland Security Presidential Directive 10. The directive laid out the country's 21st-century biodefense system and assigned federal agencies tasks to prevent, protect against, and mitigate biological attacks on the homeland and global interests. Since that time, however, the federal government has not had a comprehensive biodefense strategy. Daniel Gerstein, a senior policy researcher at the RAND Corporation and former acting undersecretary and deputy undersecretary of the Department of Homeland Security's Science and Technology Directorate, said, "...we haven't had any major bioterror attacks [since the anthrax attacks of 2001] so this sort of leaves the public's consciousness and that's when complacency sets in."

Biosurveillance
In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
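The kind of "signal detection" described here, flagging an anomalous rise over a recent baseline, can be sketched minimally. This is a toy threshold detector on hypothetical daily sales counts, not the actual RODS algorithms:

```python
# Toy surveillance signal detector (illustrative; not the RODS system):
# flag a day when over-the-counter sales exceed the recent baseline
# mean by more than `threshold` standard deviations.
import statistics

def detect_anomaly(history, today, threshold=3.0):
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return (today - mean) > threshold * sd

baseline = [102, 98, 105, 99, 101, 97, 103]  # hypothetical daily sales
print(detect_anomaly(baseline, 104))  # within normal variation -> False
print(detect_anomaly(baseline, 160))  # suspicious spike -> True
```

Real biosurveillance systems layer far more sophisticated statistics (seasonal adjustment, spatial clustering, multi-stream fusion) on top of this same basic idea of comparing incoming counts against an expected baseline.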

On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Cold War Distant Early Warning radar line).

The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the Handbook of Biosurveillance, edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. Its principles apply to both natural and man-made epidemics (bioterrorism).

Data that could potentially assist in the early detection of a bioterrorism event span many categories. Health-related data, such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems, could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance records, and physiologic monitors, among others. Intuitively, one would expect systems that collect more than one type of data to be more useful and less prone to false alarms than single-purpose systems (such as laboratory-only or 911-call-center-based systems), and this appears to be the case.

In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak.

Researchers are experimenting with devices to detect the existence of a threat:

  • Tiny electronic chips that would contain living nerve cells to warn of the presence of bacterial toxins (identification of broad range toxins)
  • Fiber-optic tubes lined with antibodies coupled to light-emitting molecules (identification of specific pathogens, such as anthrax, botulinum, ricin)

New research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008 issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters.

The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System.

Response to bioterrorism incident or threat

Government agencies that would be called on to respond to a bioterrorism incident include law enforcement, hazardous materials/decontamination units, and emergency medical units. The US military has specialized units that can respond to a bioterrorism event, among them the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents. There are four hospitals capable of caring for anyone exposed to a BSL-3 or BSL-4 pathogen; the special clinical studies unit at the National Institutes of Health, a facility built in April 2010, is one of them. The unit has state-of-the-art isolation capabilities with a unique airflow system, and its staff are trained to care for patients who are ill due to a highly infectious pathogen outbreak, such as Ebola. The doctors work closely with USAMRIID, NBACC, and the IRF. Special trainings take place regularly to maintain a high level of confidence in caring for these patients.

Biodefense market

In 2015, the global biodefense market was estimated at $9.8 billion. Experts attributed the large marketplace to an increase in government attention and support in response to rising bioterrorism threats worldwide. Governments' heightened interest is anticipated to expand the industry for the foreseeable future. According to Medgadget.com, "Many government legislations like Project Bioshield offers nations with counter measures against chemical, radiological, nuclear and biological attack."

Project Bioshield offers accessible biological countermeasures targeting various strains of smallpox and anthrax. "Main goal of the project is creating funding authority to build next generation counter measures, make innovative research & development programs and create a body like FDA (Food & Drug Administration) that can effectively use treatments in case of emergencies." Increased funding, in addition to public health organizations' elevated consideration of biodefense technology investments, could trigger growth in the global biodefense market.

The global biodefense market is divided geographically into APAC, Latin America, Europe, MEA, and North America. The North American biodefense industry led the global industry by a large margin, holding the highest regional revenue share for 2015 and contributing approximately $8.91 billion of revenue that year, owing to immense funding and government reinforcement. The biodefense market in Europe is predicted to register a CAGR of 11.41% over the forecast period. The United Kingdom's Ministry of Defence granted $75.67 million for defense and civilian research, the highest regional industry share for 2012.

In 2016, Global Market Insights released a report covering the new trends in the biodefense market backed by detailed, scientific data. Industry leaders in biodefense market include the following corporations: Emergent Biosolutions, SIGA Technologies, Ichor Medical Systems Incorporation, PharmaAthene, Cleveland BioLabs Incorporation, Achaogen (bankrupt in 2019), Alnylam Pharmaceuticals, Avertis, Xoma Corporation, Dynavax Technologies Incorporation, Elusys Therapeutics, DynPort Vaccine Company LLC, Bavarian Nordic and Nanotherapeutics Incorporation.

Legislation

During the 115th Congress in July 2018, four Members of Congress, both Republican and Democrat (Anna Eshoo, Susan Brooks, Frank Pallone, and Greg Walden), introduced biodefense legislation called the Pandemic and All-Hazards Preparedness and Advancing Innovation Act (PAHPA) (H.R. 6378). The bill strengthens the federal government's preparedness for a wide range of public health emergencies, whether created through an act of bioterrorism or occurring through a natural disaster. The bill reauthorizes funding to improve bioterrorism and other public health emergency preparedness and response activities, such as the Hospital Preparedness Program, the Public Health Emergency Preparedness Cooperative Agreement, Project BioShield, and BARDA for the advanced research and development of medical countermeasures (MCMs).

H.R. 6378 has 24 cosponsors from both political parties. On September 25, 2018, the House of Representatives passed the bill.

Aneutronic fusion

https://en.wikipedia.org/wiki/Aneutronic_fusion
Lithium-6-deuterium fusion reaction: an aneutronic fusion reaction, with the energy released carried by alpha particles, not neutrons.

Aneutronic fusion is any form of fusion power in which very little of the energy released is carried by neutrons. While the lowest-threshold nuclear fusion reactions release up to 80% of their energy in the form of neutrons, aneutronic reactions release energy in the form of charged particles, typically protons or alpha particles. Successful aneutronic fusion would greatly reduce problems associated with neutron radiation such as damaging ionizing radiation, neutron activation, reactor maintenance, and requirements for biological shielding, remote handling and safety.

Since it is simpler to convert the energy of charged particles into electrical power than it is to convert energy from uncharged particles, an aneutronic reaction would be attractive for power systems. Some proponents see a potential for dramatic cost reductions by converting energy directly to electricity, as well as in eliminating the radiation from neutrons, which are difficult to shield against. However, the conditions required to harness aneutronic fusion are much more extreme than those required for deuterium-tritium fusion such as at ITER or Wendelstein 7-X.

History

The first experiments in the field started in 1939, and serious efforts have been continual since the early 1950s.

An early supporter was Richard F. Post at Lawrence Livermore. He proposed to capture the kinetic energy of charged particles as they were exhausted from a fusion reactor and convert this into voltage to drive current. Post helped develop the theoretical underpinnings of direct conversion, later demonstrated by Barr and Moir. They demonstrated a 48 percent energy capture efficiency on the Tandem Mirror Experiment in 1981.

Polywell fusion was pioneered by the late Robert W. Bussard in 1995 and funded by the US Navy. Polywell uses inertial electrostatic confinement. He founded EMC2 to continue polywell research.

A picosecond pulse of a 10-terawatt laser produced hydrogen-boron aneutronic fusions for a Russian team in 2005. However, the number of resulting α particles (around 10³ per laser pulse) was low.

In 2006, the Z-machine at Sandia National Laboratory, a z-pinch device, reached 2 billion kelvins and 300 keV.

In 2011, Lawrenceville Plasma Physics published initial results and outlined a theory and experimental program for aneutronic fusion with the dense plasma focus (DPF). The effort was initially funded by NASA's Jet Propulsion Laboratory. Support for other DPF aneutronic fusion investigations came from the Air Force Research Laboratory.

A French research team fused protons and boron-11 nuclei using a laser-accelerated proton beam and high-intensity laser pulse. In October 2013 they reported an estimated 80 million fusion reactions during a 1.5 nanosecond laser pulse.

In 2016, a team at the Shanghai Chinese Academy of Sciences produced a laser pulse of 5.3 petawatts with the Superintense Ultrafast Laser Facility (SULF) and expected to reach 10 petawatts with the same equipment.

In 2021, TAE Technologies announced that Norman, its field-reversed configuration device, was regularly producing a stable plasma at temperatures over 50 million degrees.

In 2021, a Russian team reported experimental results from a miniature device with electrodynamic (oscillatory) plasma confinement. It used a ~1-2 J nanosecond vacuum discharge with a virtual cathode, whose field accelerates ions to ~100-300 keV through oscillating-ion collisions. About 5×10⁴/4π α particles (~10 α particles/ns) were obtained over a total of 4 μs of voltage application.

HB11 Energy is an Australian spin-off company created in September 2019 that holds the patents of UNSW theoretical physicist Heinrich Hora. Its device uses two petawatt-class chirped-pulse lasers to drive low-temperature proton-boron fusion via an "in-target" approach: one laser drives hydrogen atoms via target normal sheath acceleration toward a boron plasma confined by a kilotesla magnetic field powered by the other laser. The resulting helium ions are converted to electricity without a thermal conversion step and its associated losses. The picosecond laser produces an avalanche reaction said to offer a roughly billion-fold increase in fusion yield compared to other ICF systems. In 2022, the company claimed to be the first commercial company to demonstrate fusion: it measured an alpha-particle flux of 10¹⁰/sr, one order of magnitude higher than its earlier results but still four orders of magnitude short of net energy gain.

Definition

Fusion reactions can be categorized according to their neutronicity: the fraction of the fusion energy released as energetic neutrons. The State of New Jersey defined an aneutronic reaction as one in which neutrons carry no more than 1% of the total released energy, although many papers on the subject include reactions that do not meet this criterion.
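Under this definition, neutronicity is simply the neutron's share of the released energy. A quick check for D-T, using the standard values of 17.6 MeV total release and 14.1 MeV carried by the neutron (these figures are not given in the text), reproduces the "up to 80%" quoted earlier:

```python
# Neutronicity = fraction of fusion energy carried by neutrons.
# D-T values (standard, assumed here): 17.6 MeV total, 14.1 MeV to the neutron.
def neutronicity(e_neutron_mev, e_total_mev):
    return e_neutron_mev / e_total_mev

dt = neutronicity(14.1, 17.6)
print(round(dt, 2))  # ~0.80: D-T is far above the 1% aneutronic threshold
```

By contrast, a strictly aneutronic reaction such as p-¹¹B has a neutronicity near zero on its main branch, which is what qualifies it under the 1% criterion.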

Coulomb barrier

The Coulomb barrier is the minimum energy required for the nuclei in a fusion reaction to overcome their mutual electrostatic repulsion. The repulsive force between a particle with charge +Z1 and one with +Z2 is proportional to (Z1·Z2)/r², where r is the distance between them. The Coulomb barrier facing a pair of reacting charged particles depends both on the total charge and on how equally the charges are distributed: the barrier is lowest when a low-Z particle reacts with a high-Z one, and highest when the reactants have roughly equal charge. Barrier energy is thus minimized for ions with the fewest protons.
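An order-of-magnitude estimate of the barrier follows from evaluating the Coulomb energy when the two nuclei just touch. This sketch assumes the common nuclear-radius parameterization r = r0·(A1^(1/3) + A2^(1/3)) with r0 ≈ 1.2 fm, which is not from the text:

```python
# Order-of-magnitude Coulomb barrier for two nuclei in contact.
# Uses ke^2 ~ 1.44 MeV*fm and r = r0*(A1^(1/3) + A2^(1/3)), r0 ~ 1.2 fm (assumed).
K_E2 = 1.44  # Coulomb constant times e^2, in MeV*fm

def barrier_mev(z1, a1, z2, a2, r0=1.2):
    r = r0 * (a1 ** (1 / 3) + a2 ** (1 / 3))  # separation at contact, fm
    return K_E2 * z1 * z2 / r

print(barrier_mev(1, 2, 1, 3))   # D-T: lowest barrier (~0.4 MeV)
print(barrier_mev(1, 1, 5, 11))  # p-11B: several times higher, per Z1*Z2
```

The charge product dominates: p-¹¹B (Z1·Z2 = 5) faces a barrier roughly four times that of D-T (Z1·Z2 = 1), which is the quantitative content of the "fewest protons" statement above.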

Once the nuclear potential wells of the two reacting particles are within two proton radii of each other, the two can begin attracting one another via the nuclear force. This interaction is much stronger than even the electromagnetic interaction, so the particles are drawn together, releasing nuclear energy despite the ongoing electrical repulsion. The nuclear force is very short-range, however, so it is an oversimplification to say it simply increases with the number of nucleons: the statement holds for the volume and surface energy of a nucleus, is less true of the Coulomb energy, and says nothing about proton/neutron balance. Once reactants have passed the Coulomb barrier, they are in a regime dominated by a force that does not behave like electromagnetism.

In most fusion concepts, the energy needed to overcome the Coulomb barrier is provided by collisions with other fuel ions. In a thermalized fluid such as a plasma, the temperature corresponds to an energy spectrum given by the Maxwell–Boltzmann distribution: a gas in this state has some high-energy particles even when the average energy is much lower. Fusion devices rely on this distribution; even at bulk temperatures far below the Coulomb barrier energy, the energy released by the reactions is great enough that capturing some of it maintains a supply of sufficiently high-energy ions to keep the reaction going.

Thus, steady operation of the reactor is based on a balance between the rate that energy is added to the fuel by fusion reactions and the rate energy is lost to the surroundings. This concept is best expressed as the fusion triple product, the product of the temperature, density and "confinement time", the amount of time energy remains in the fuel before escaping to the environment. The product of temperature and density gives the reaction rate for any given fuel. The rate of reaction is proportional to the nuclear cross section (σ).

Any given device can sustain some maximum plasma pressure. An efficient device would continuously operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is such that <σv>/T2 is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to <σv>/T2. A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.

Because the Coulomb barrier is proportional to the product of the proton counts (Z1·Z2) of the two reactants, the heavy hydrogen isotopes deuterium and tritium (D-T) give the fuel with the lowest total Coulomb barrier. All other potential fuels have higher Coulomb barriers and thus require higher operating temperatures. Additionally, D-T fuel has the highest nuclear cross-section, so its reaction rate is higher than that of any other fuel, making D-T fusion the easiest to achieve. The table below compares three candidate aneutronic reactions with D-T, showing the ignition temperature and cross-section for each:

Candidate reactions

Reaction   Ignition T [keV]   Cross-section <σv>/T² [m³/s/keV²]
²D-³T      13.6               1.24×10⁻²⁴
²D-³He     58                 2.24×10⁻²⁶
p⁺-⁶Li     66                 1.46×10⁻²⁷
p⁺-¹¹B     123                3.01×10⁻²⁷

As can be seen, the easiest to ignite of the aneutronic reactions, D-3He, has an ignition temperature over four times as high as that of the D-T reaction, and correspondingly lower cross-sections, while the p-11B reaction is nearly ten times more difficult to ignite.

Candidate reactions

Several fusion reactions produce no neutrons on any of their branches. Those with the largest cross sections are:

High nuclear cross section aneutronic reactions

Isotopes         Reaction
Deuterium-³He    ²D + ³He → ⁴He + ¹p + 18.3 MeV
Deuterium-⁶Li    ²D + ⁶Li → 2 ⁴He + 22.4 MeV
Proton-⁶Li       ¹p + ⁶Li → ⁴He + ³He + 4.0 MeV
³He-⁶Li          ³He + ⁶Li → 2 ⁴He + ¹p + 16.9 MeV
³He-³He          ³He + ³He → ⁴He + 2 ¹p + 12.86 MeV
Proton-⁷Li       ¹p + ⁷Li → 2 ⁴He + 17.2 MeV
Proton-¹¹B       ¹p + ¹¹B → 3 ⁴He + 8.7 MeV
Proton-¹⁵N       ¹p + ¹⁵N → ¹²C + ⁴He + 5.0 MeV

Candidate fuels

3He

The 3He–D reaction has been studied as an alternative fusion plasma because it has the lowest energy threshold.

The p–6Li, 3He–6Li, and 3He–3He reaction rates are not particularly high in a thermal plasma. When treated as a chain, however, they offer the possibility of enhanced reactivity due to a non-thermal distribution. The product 3He from the p–6Li reaction could participate in the second reaction before thermalizing, and the product p from 3He–6Li could participate in the former before thermalizing. Detailed analyses, however, do not show sufficient reactivity enhancement to overcome the inherently low cross section.

The 3He reaction suffers from a 3He availability problem. 3He occurs in only minuscule amounts on Earth, so it would either have to be bred from neutron reactions (counteracting the potential advantage of aneutronic fusion) or mined from extraterrestrial sources.

The amount of 3He needed for large-scale applications can also be described in terms of total consumption: according to the US Energy Information Administration, "Electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h" (1.14×1015 W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of 3He would be required for that segment of the energy demand of the United States, 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency. Extracting that amount of pure 3He would entail processing 2 billion tonnes of lunar material per year, even assuming a recovery rate of 100%.
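The tonnage figure quoted above can be sanity-checked with a back-of-envelope calculation, assuming 100% conversion efficiency and 18.3 MeV released per D-3He reaction (one 3He nucleus consumed per reaction); the constants are standard physical values:

```python
# Back-of-envelope check of the 3He tonnage quoted above, assuming
# 100% conversion of the 18.3 MeV released per D-3He reaction.
E_DEMAND_J = 1.14e15 * 3600              # 1,140 billion kWh -> joules
E_PER_REACTION_J = 18.3e6 * 1.602e-19    # 18.3 MeV in joules
M_HE3_KG = 3.016 * 1.66054e-27           # mass of one 3He nucleus, kg

n_reactions = E_DEMAND_J / E_PER_REACTION_J
tonnes_he3 = n_reactions * M_HE3_KG / 1000.0
print(f"{tonnes_he3:.1f} tonnes of 3He per year")  # ~7 tonnes, consistent with the ~6.7 t figure
```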

In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium-deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".

Deuterium

Although the deuterium reactions (deuterium + 3He and deuterium + 6lithium) do not in themselves release neutrons, in a fusion reactor the plasma would also sustain D-D side reactions, which produce 3He plus a neutron. Although neutron production can be minimized by running the plasma hot and deuterium-lean, the fraction of energy released as neutrons is probably several percent, so these fuel cycles, although neutron-poor, do not meet the 1% threshold. The D-3He reaction also suffers from the 3He fuel-availability problem discussed above.

Lithium

Fusion reactions involving lithium are well studied due to the use of lithium for breeding tritium in thermonuclear weapons. They are intermediate in ignition difficulty between the reactions involving lower atomic-number species, H and He, and the 11B reaction.

The p–7Li reaction, although highly energetic, releases neutrons because of the high cross section for the alternate neutron-producing reaction 1p + 7Li → 7Be + n.

Boron

Many studies of aneutronic fusion concentrate on the p–11B reaction, which uses easily available fuel. The fusion of the boron nucleus with a proton produces energetic alpha particles (helium nuclei).

Since igniting the p–11B reaction is much more difficult than D-T, alternatives to the usual tokamak fusion reactors are usually proposed, such as inertial confinement fusion. One proposed method uses one laser to create a boron-11 plasma and another to create a stream of protons that smash into the plasma. The proton beam produces a tenfold increase of fusion because protons and boron nuclei collide directly. Earlier methods used a solid boron target, "protected" by its electrons, which reduced the fusion rate. Experiments suggest that a petawatt-scale laser pulse could launch an 'avalanche' fusion reaction, although this remains controversial. The plasma lasts about one nanosecond, requiring the picosecond pulse of protons to be precisely synchronized. Unlike conventional methods, this approach does not require a magnetically confined plasma. The proton beam is preceded by an electron beam, generated by the same laser, that strips electrons in the boron plasma, increasing the protons' chance to collide with the boron nuclei and fuse.

Residual radiation

Calculations show that at least 0.1% of the reactions in a thermal p–11B plasma produce neutrons, although their energy accounts for less than 0.2% of the total energy released.

These neutrons come primarily from the reaction:

11B + α → 14N + n + 157 keV

The reaction itself produces only 157 keV, but the neutron carries a large fraction of the alpha energy, close to Efusion/3 = 2.9 MeV. Another significant source of neutrons is:

11B + p → 11C + n − 2.8 MeV.

These neutrons are less energetic, with an energy comparable to the fuel temperature. In addition, 11C itself is radioactive, but quickly decays to 11B with a half life of only 20 minutes.
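As a minimal sketch of the 11C decay mentioned above (half-life about 20 minutes), the fraction of the activation product remaining after a given time follows the usual exponential law:

```python
# Fraction of 11C remaining after t minutes, given its ~20-minute
# half-life as stated above: N(t)/N0 = 2^(-t / t_half).
T_HALF_MIN = 20.0

def fraction_remaining(t_min):
    return 2.0 ** (-t_min / T_HALF_MIN)

print(fraction_remaining(120))  # after 2 hours, 2^-6 ~ 1.6% of the 11C is left
```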

Since these reactions involve the reactants and products of the primary reaction, it is difficult to lower the neutron production by a significant fraction. A clever magnetic confinement scheme could in principle suppress the first reaction by extracting the alphas as they are created, but then their energy would not be available to keep the plasma hot. The second reaction could in principle be suppressed relative to the desired fusion by removing the high energy tail of the ion distribution, but this would probably be prohibited by the power required to prevent the distribution from thermalizing.

In addition to neutrons, large quantities of hard X-rays are produced by bremsstrahlung, and 4, 12, and 16 MeV gamma rays are produced by the fusion reaction

11B + p → 12C + γ + 16.0 MeV

with a branching probability relative to the primary fusion reaction of about 10−4.

The hydrogen must be isotopically pure and the influx of impurities into the plasma must be controlled to prevent neutron-producing side reactions such as:

11B + d → 12C + n + 13.7 MeV
d + d → 3He + n + 3.27 MeV

The shielding design reduces the occupational dose of both neutron and gamma radiation to a negligible level. The primary components are water (to moderate the fast neutrons), boron (to absorb the moderated neutrons) and metal (to absorb X-rays). The total thickness is estimated to be about one meter, mostly water.

Energy capture

Aneutronic fusion produces energy in the form of charged particles instead of neutrons. This means that energy from aneutronic fusion could be captured using direct conversion instead of thermally. Direct conversion can be either inductive, based on changes in magnetic fields, electrostatic, based on pitting charged particles against an electric field, or photoelectric, in which light energy is captured in a pulsed mode.

Electrostatic direct conversion uses the motion of charged particles to create voltage. This voltage drives electricity in a wire which becomes electrical power. It is the reverse of phenomena that use a voltage to put a particle in motion. It has been described as a linear accelerator running backwards.

Aneutronic fusion loses much of its energy as light. This energy results from the acceleration and deceleration of charged particles. These speed changes can be caused by bremsstrahlung radiation or cyclotron radiation or synchrotron radiation or electric field interactions. The radiation can be estimated using the Larmor formula and comes in the X-ray, IR, UV and visible spectra. Some of the energy radiated as X-rays may be converted directly to electricity. Because of the photoelectric effect, X-rays passing through an array of conducting foils transfer some of their energy to electrons, which can then be captured electrostatically. Since X-rays can go through far greater material thickness than electrons, many hundreds or thousands of layers are needed to absorb them.
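The Larmor estimate mentioned above can be sketched as follows; the acceleration in the example is purely illustrative, not a plasma-specific figure:

```python
import math

# Larmor formula (SI): total power radiated by a point charge q
# undergoing acceleration a -- the estimate for radiation losses
# from accelerating/decelerating charged particles mentioned above.
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
C = 2.99792458e8             # speed of light, m/s
E_CHARGE = 1.602176634e-19   # elementary charge, C

def larmor_power(q, a):
    """P = q^2 a^2 / (6 pi eps0 c^3), in watts."""
    return q**2 * a**2 / (6 * math.pi * EPS0 * C**3)

# Illustrative example: an electron at a = 1e20 m/s^2
print(larmor_power(E_CHARGE, 1e20))
```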

Technical challenges

Many challenges confront the commercialization of aneutronic fusion.

Temperature

The large majority of fusion research has gone toward D-T fusion, which is the easiest to achieve. Fusion experiments typically use deuterium-deuterium fusion (D-D) because deuterium is cheap and easy to handle, being non-radioactive. Experimenting with D-T fusion is more difficult because tritium is expensive and radioactive, requiring additional environmental protection and safety measures.

The combination of lower cross-section and higher loss rates in D-3He fusion is offset to a degree because the reaction products are mainly charged particles that deposit their energy in the plasma. This combination of offsetting features demands an operating temperature about four times that of a D-T system. However, due to the high loss rates and consequent rapid cycling of energy, the confinement time of a working reactor needs to be about fifty times higher than for D-T, and the energy density about 80 times higher. This requires significant advances in plasma physics.

Proton–boron fusion requires ion energies, and thus plasma temperatures, some nine times higher than those for D-T fusion. For any given density of the reacting nuclei, the reaction rate for proton-boron achieves its peak rate at around 600 keV (6.6 billion degrees Celsius or 6.6 gigakelvins)[42] while D-T has a peak at around 66 keV (765 million degrees Celsius, or 0.765 gigakelvin). For pressure-limited confinement concepts, optimum operating temperatures are about 5 times lower, but the ratio is still roughly ten-to-one.
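The temperature conversions used above follow from T[K] = E[keV] · 10³ · e / kB, about 1.16×10⁷ K per keV; a minimal sketch (the gigakelvin figures quoted above are rounded):

```python
# keV <-> kelvin conversion implicit in the figures above:
# T[K] = E[keV] * 1e3 * e / k_B, i.e. ~1.16e7 K per keV.
K_B = 1.380649e-23           # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19   # elementary charge, C

def kev_to_kelvin(kev):
    return kev * 1e3 * E_CHARGE / K_B

print(f"600 keV -> {kev_to_kelvin(600):.2e} K")  # roughly 7e9 K (several gigakelvins)
print(f" 66 keV -> {kev_to_kelvin(66):.2e} K")   # roughly 7.7e8 K (~0.77 gigakelvin)
```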

Power balance

The peak reaction rate of p–11B is only one third that of D-T, requiring better plasma confinement. Confinement is usually characterized by the time τ for which the energy is retained, such that the power released exceeds that required to heat the plasma. Various requirements can be derived, most commonly the product with density, nτ (the Lawson criterion), and the product with pressure, nTτ (the triple product). The nτ required for p–11B is 45 times higher than that for D-T; the required nTτ is 500 times higher. Since the confinement properties of conventional fusion approaches, such as the tokamak and laser pellet fusion, are marginal, most aneutronic proposals use radically different confinement concepts.

In most fusion plasmas, bremsstrahlung radiation is a major energy-loss channel. (See also bremsstrahlung losses in quasineutral, isotropic plasmas.) For the p–11B reaction, some calculations indicate that the bremsstrahlung power will be at least 1.74 times larger than the fusion power. The corresponding ratio for the 3He-3He reaction is only slightly more favorable, at 1.39. These ratios do not apply to non-neutral plasmas and differ in anisotropic plasmas.

In conventional reactor designs, whether based on magnetic or inertial confinement, the bremsstrahlung can easily escape the plasma and is considered a pure energy loss term. The outlook would be more favorable if the plasma could reabsorb the radiation. Absorption occurs primarily via Thomson scattering on the electrons, which has a total cross section of σT = 6.65×10−29 m2. In a 50–50 D-T mixture this corresponds to a range of 6.3 g/cm2. This is considerably higher than the Lawson criterion of ρR > 1 g/cm2, which is already difficult to attain, but might be achievable in inertial confinement systems.
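The 6.3 g/cm² range quoted above can be reproduced from the Thomson cross section and the mean mass per electron in a 50–50 D-T mixture:

```python
# Reproduces the ~6.3 g/cm^2 Thomson-scattering range quoted above:
# mass per electron in a 50-50 D-T mixture divided by the Thomson
# cross section (each D and T atom contributes one electron).
SIGMA_T_CM2 = 6.65e-25    # Thomson cross section, cm^2 (= 6.65e-29 m^2)
AMU_G = 1.66054e-24       # atomic mass unit, grams
M_D, M_T = 2.014, 3.016   # D and T masses in amu

mass_per_electron_g = (M_D + M_T) / 2 * AMU_G
range_g_cm2 = mass_per_electron_g / SIGMA_T_CM2
print(f"{range_g_cm2:.2f} g/cm^2")  # ~6.3 g/cm^2
```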

In megatesla magnetic fields a quantum mechanical effect might suppress energy transfer from the ions to the electrons. According to one calculation, bremsstrahlung losses could be reduced to half the fusion power or less. In a strong magnetic field cyclotron radiation is even larger than the bremsstrahlung. In a megatesla field, an electron would lose its energy to cyclotron radiation in a few picoseconds if the radiation could escape. However, in a sufficiently dense plasma (ne > 2.5×1030 m−3, a density greater than that of a solid), the cyclotron frequency is less than twice the plasma frequency. In this well-known case, the cyclotron radiation is trapped inside the plasmoid and cannot escape, except from a very thin surface layer.

While megatesla fields have not yet been achieved, fields of 0.3 megatesla have been produced with high intensity lasers, and fields of 0.02–0.04 megatesla have been observed with the dense plasma focus device.

At much higher densities (ne > 6.7×1034 m−3), the electrons will be Fermi degenerate, which suppresses bremsstrahlung losses, both directly and by reducing energy transfer from the ions to the electrons. If the necessary conditions can be attained, net energy production from p–11B or D–3He fuel may be possible. The probability of a feasible reactor based solely on this effect remains low, however, because the gain is predicted to be less than 20, while more than 200 is usually considered necessary.

Power density

In every published fusion power plant design, the part of the plant that produces the fusion reactions is much more expensive than the part that converts the nuclear power to electricity. In that case, as indeed in most power systems, power density is an important characteristic. Doubling power density at least halves the cost of electricity. In addition, the confinement time required depends on the power density.

It is, however, not trivial to compare the power density produced by different fusion fuel cycles. The case most favorable to p–11B relative to D-T fuel is a (hypothetical) confinement device that only works well at ion temperatures above about 400 keV, in which the reaction rate parameter <σv> is equal for the two fuels, and that runs with low electron temperature. p–11B does not require as long a confinement time because the energy of its charged products is two and a half times higher than that for D-T. However, relaxing these assumptions, for example by considering hot electrons, by allowing the D-T reaction to run at a lower temperature or by including the energy of the neutrons in the calculation shifts the power density advantage to D-T.

The most common assumption is to compare power densities at the same pressure, choosing the ion temperature for each reaction to maximize power density, and with the electron temperature equal to the ion temperature. Although confinement schemes can be and sometimes are limited by other factors, most well-investigated schemes have some kind of pressure limit. Under these assumptions, the power density for p–11B is about 2,100 times smaller than that for D-T. Using cold electrons lowers the ratio to about 700. These numbers are another indication that aneutronic fusion power is not possible with mainline confinement concepts.
