Tuesday, March 25, 2025

Pharmacogenomics

From Wikipedia, the free encyclopedia

Pharmacogenomics, often abbreviated "PGx," is the study of the role of the genome in drug response. Its name (pharmaco- + genomics) reflects its combining of pharmacology and genomics. Pharmacogenomics analyzes how the genetic makeup of a patient affects their response to drugs. It deals with the influence of acquired and inherited genetic variation on drug response, by correlating DNA mutations (including point mutations, copy number variations, and structural variations) with pharmacokinetic (drug absorption, distribution, metabolism, and elimination), pharmacodynamic (effects mediated through a drug's biological targets), and/or immunogenic endpoints.

Pharmacogenomics aims to develop rational means to optimize drug therapy with respect to the patient's genotype, to achieve maximum efficacy with minimal adverse effects. It is hoped that, by using pharmacogenomics, pharmaceutical drug treatments can move away from the so-called "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error in prescribing, allowing physicians to take into consideration their patient's genes, the functionality of those genes, and how this may affect the effectiveness of the patient's current or future treatments (and, where applicable, provide an explanation for the failure of past treatments). Such approaches promise the advent of precision medicine and even personalized medicine, in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup.

Whether used to explain a patient's response (or lack of one) to a treatment, or to act as a predictive tool, pharmacogenomics aims to achieve better treatment outcomes and greater efficacy, and to reduce drug toxicities and adverse drug reactions (ADRs). For patients who do not respond to a treatment, alternative therapies can be prescribed that better suit their requirements. To provide pharmacogenomic recommendations for a given drug, two types of input can be used: genotyping, or exome or whole-genome sequencing. Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (early stop codons).

Pharmacogenetics vs. pharmacogenomics

The term pharmacogenomics is often used interchangeably with pharmacogenetics. Although both terms relate to drug response based on genetic influences, there are differences between the two. Pharmacogenetics is limited to monogenic phenotypes (i.e., single gene-drug interactions). Pharmacogenomics refers to polygenic drug response phenotypes and encompasses transcriptomics, proteomics, and metabolomics.

Mechanisms of pharmacogenetic interactions

Pharmacokinetics

Pharmacokinetics involves the absorption, distribution, metabolism, and elimination of pharmaceuticals. These processes are often mediated by proteins such as drug transporters or drug-metabolizing enzymes (discussed in depth below). Variation in the DNA loci responsible for producing these proteins can alter their expression or activity so that their functional status changes. An increase, decrease, or loss of function for transporters or metabolizing enzymes can ultimately alter the amount of medication in the body and at the site of action. This may push the medication outside its therapeutic window and result in either toxicity or loss of effectiveness.
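To make the therapeutic-window idea concrete, here is a minimal one-compartment pharmacokinetic sketch in Python. All of the numbers (dose, volume of distribution, clearances, and the window itself) are illustrative assumptions, not values from any guideline; the point is only that lower clearance, as in a poor metabolizer, keeps concentrations high for longer, while higher clearance lets them fall below the effective range sooner.

    import math

    # Illustrative (non-clinical) parameters: dose, volume of distribution,
    # therapeutic window, and clearance by metabolizer phenotype -- all assumed.
    dose_mg, volume_L = 100.0, 50.0
    window = (0.5, 3.0)                      # therapeutic window, mg/L
    clearance_L_per_h = {"normal metabolizer": 5.0,
                         "poor metabolizer":   1.5}

    for phenotype, CL in clearance_L_per_h.items():
        k = CL / volume_L                    # first-order elimination rate, 1/h
        c0 = dose_mg / volume_L              # initial concentration, mg/L
        for t in (0, 6, 12, 24):
            c = c0 * math.exp(-k * t)        # C(t) = C0 * exp(-k t)
            tag = "" if window[0] <= c <= window[1] else "  <-- outside window"
            print(f"{phenotype:18s} t = {t:2d} h   C = {c:4.2f} mg/L{tag}")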

Drug-metabolizing enzymes

The majority of clinically actionable pharmacogenetic variation occurs in genes that code for drug-metabolizing enzymes, including those involved in both phase I and phase II metabolism. The cytochrome P450 enzyme family is responsible for metabolism of 70-80% of all medications used clinically.[11] CYP3A4, CYP2C9, CYP2C19, and CYP2D6 are major CYP enzymes involved in drug metabolism and are all known to be highly polymorphic. Additional drug-metabolizing enzymes that have been implicated in pharmacogenetic interactions include UGT1A1 (a UDP-glucuronosyltransferase), DPYD, and TPMT.

Drug transporters

Many medications rely on transporters to cross cellular membranes in order to move between body fluid compartments such as the blood, gut lumen, bile, urine, brain, and cerebrospinal fluid. The major transporters include the solute carrier, ATP-binding cassette, and organic anion transporters. Transporters that have been shown to influence response to medications include OATP1B1 (SLCO1B1) and breast cancer resistance protein (BCRP) (ABCG2).

Pharmacodynamics

Pharmacodynamics refers to the impact a medication has on the body, or its mechanism of action.

Drug targets

Drug targets are the specific sites where a medication carries out its pharmacological activity. The interaction between the drug and this site results in a modification of the target that may include inhibition or potentiation. Most of the pharmacogenetic interactions that involve drug targets are within the field of oncology and include targeted therapeutics designed to address somatic mutations (see also Cancer Pharmacogenomics). For example, EGFR inhibitors like gefitinib (Iressa) or erlotinib (Tarceva) are only indicated in patients carrying specific mutations to EGFR.

Germline mutations in drug targets can also influence response to medications, though this is an emerging subfield within pharmacogenomics. One well-established gene-drug interaction involving a germline mutation to a drug target is that between warfarin (Coumadin) and VKORC1, which codes for vitamin K epoxide reductase (VKOR). Warfarin binds to and inhibits VKOR, an important enzyme in the vitamin K cycle. Inhibition of VKOR prevents the reduction of vitamin K, a cofactor required for the formation of coagulation factors II, VII, IX, and X, as well as the anticoagulant proteins C and S.

Off-target sites

Medications can have off-target effects (typically unfavorable) that arise from an interaction between the medication and/or its metabolites and a site other than the intended target. Genetic variation in the off-target sites can influence this interaction. The main example of this type of pharmacogenomic interaction is glucose-6-phosphate dehydrogenase (G6PD). G6PD is the enzyme involved in the first step of the pentose phosphate pathway, which generates NADPH (from NADP+). NADPH is required for the production of reduced glutathione in erythrocytes, and it is essential for the function of catalase. Glutathione and catalase protect cells from oxidative stress that would otherwise result in cell lysis. Certain variants in G6PD result in G6PD deficiency, in which cells are more susceptible to oxidative stress. When medications that have a significant oxidative effect are administered to individuals who are G6PD deficient, they are at increased risk of erythrocyte lysis, which presents as hemolytic anemia.

Immunologic

The human leukocyte antigen (HLA) system, also referred to as the major histocompatibility complex (MHC), is a complex of genes important for the adaptive immune system. Mutations in the HLA complex have been associated with an increased risk of developing hypersensitivity reactions in response to certain medications.

Clinical pharmacogenomics resources

Clinical Pharmacogenetics Implementation Consortium (CPIC)

The Clinical Pharmacogenetics Implementation Consortium (CPIC) is "an international consortium of individual volunteers and a small dedicated staff who are interested in facilitating use of pharmacogenetic tests for patient care. CPIC’s goal is to address barriers to clinical implementation of pharmacogenetic tests by creating, curating, and posting freely available, peer-reviewed, evidence-based, updatable, and detailed gene/drug clinical practice guidelines. CPIC guidelines follow standardized formats, include systematic grading of evidence and clinical recommendations, use standardized terminology, are peer-reviewed, and are published in a journal (in partnership with Clinical Pharmacology and Therapeutics) with simultaneous posting to cpicpgx.org, where they are regularly updated."

The CPIC guidelines are "designed to help clinicians understand HOW available genetic test results should be used to optimize drug therapy, rather than WHETHER tests should be ordered. A key assumption underlying the CPIC guidelines is that clinical high-throughput and pre-emptive (pre-prescription) genotyping will become more widespread, and that clinicians will be faced with having patients’ genotypes available even if they have not explicitly ordered a test with a specific drug in mind. CPIC's guidelines, processes and projects have been endorsed by several professional societies."

U.S. Food and Drug Administration

Table of Pharmacogenetic Associations

In February 2020 the FDA published the Table of Pharmacogenetic Associations. For the gene-drug pairs included in the table, "the FDA has evaluated and believes there is sufficient scientific evidence to suggest that subgroups of patients with certain genetic variants, or genetic variant-inferred phenotypes (such as affected subgroup in the table below), are likely to have altered drug metabolism, and in certain cases, differential therapeutic effects, including differences in risks of adverse events."

"The information in this Table is intended primarily for prescribers, and patients should not adjust their medications without consulting their prescriber. This version of the table is limited to pharmacogenetic associations that are related to drug metabolizing enzyme gene variants, drug transporter gene variants, and gene variants that have been related to a predisposition for certain adverse events. The FDA recognizes that various other pharmacogenetic associations exist that are not listed here, and this table will be updated periodically with additional pharmacogenetic associations supported by sufficient scientific evidence."

Table of Pharmacogenomic Biomarkers in Drug Labeling

The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling lists FDA-approved drugs with pharmacogenomic information found in the drug labeling. "Biomarkers in the table include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; selected protein biomarkers that are used to select treatments for patients are also included."

PharmGKB

The Pharmacogenomics Knowledgebase (PharmGKB) is an "NIH-funded resource that provides information about how human genetic variation affects response to medications. PharmGKB collects, curates and disseminates knowledge about clinically actionable gene-drug associations and genotype-phenotype relationships."

Commercial Pharmacogenetic Testing Laboratories

There are many commercial laboratories around the world that offer pharmacogenomic testing as laboratory-developed tests (LDTs). The tests offered can vary significantly from one lab to another, including in the genes and alleles tested, phenotype assignment, and any clinical annotations provided. With the exception of a few direct-to-consumer tests, all pharmacogenetic testing requires an order from an authorized healthcare professional. For the results to be used in a clinical setting in the United States, the laboratory performing the test must be CLIA-certified. Other regulations vary by country and state.

[Table: Final consensus terms for allele functional status and phenotype]

Direct-to-Consumer Pharmacogenetic Testing

Direct-to-consumer (DTC) pharmacogenetic tests allow consumers to obtain pharmacogenetic testing without an order from a prescriber. DTC pharmacogenetic tests are generally reviewed by the FDA to determine the validity of test claims. The FDA maintains a list of DTC genetic tests that have been approved.

Common Pharmacogenomic-Specific Nomenclature

Genotype

There are multiple ways to represent a pharmacogenomic genotype. A commonly used nomenclature system reports haplotypes using star (*) alleles (e.g., CYP2C19 *1/*2). Single-nucleotide polymorphisms (SNPs) may be described using their assigned reference SNP cluster ID (rsID) or based on the location of the base pair or amino acid affected.
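As a toy illustration of star-allele nomenclature, the Python sketch below maps a CYP2C19 diplotype to a coarse predicted metabolizer phenotype. The abbreviated allele-function table loosely follows published CPIC-style assignments, but it is illustrative only and not a clinical reference.

    # Hypothetical, abbreviated allele-function table (illustrative only)
    allele_function = {
        "*1": "normal", "*2": "no function", "*3": "no function",
        "*17": "increased",
    }

    def predict_phenotype(diplotype):
        """Map a star-allele diplotype (e.g. '*1/*2') to a coarse phenotype."""
        a1, a2 = diplotype.split("/")
        funcs = [allele_function[a1], allele_function[a2]]
        if funcs.count("no function") == 2:
            return "Poor metabolizer (PM)"
        if funcs.count("no function") == 1:
            return "Intermediate metabolizer (IM)"
        if funcs.count("increased") >= 1:
            return "Rapid or ultrarapid metabolizer (RM/UM)"
        return "Normal metabolizer (NM)"

    print(predict_phenotype("*1/*2"))    # Intermediate metabolizer (IM)
    print(predict_phenotype("*2/*3"))    # Poor metabolizer (PM)
    print(predict_phenotype("*1/*17"))   # Rapid or ultrarapid metabolizer (RM/UM)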

Phenotype

In 2017, CPIC published the results of an expert survey to standardize terms related to clinical pharmacogenetic test results. Consensus was reached on terms describing allele functional status, phenotype for drug-metabolizing enzymes, phenotype for drug transporters, and phenotype for high-risk genotype status.

Applications

The list below provides a few of the more commonly known applications of pharmacogenomics:

  • Improve drug safety, and reduce ADRs;
  • Tailor treatments to patients' unique genetic predispositions, identifying optimal dosing;
  • Improve drug discovery targeted to human disease; and
  • Improve proof of principle for efficacy trials.

Pharmacogenomics may be applied to several areas of medicine, including pain management, cardiology, oncology, and psychiatry. It may also have a place in forensic pathology, where pharmacogenomics can be used to determine the cause of death in drug-related deaths in which autopsy yields no findings.

In cancer treatment, pharmacogenomic tests are used to identify which patients are most likely to respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and caregivers to better manage medication selection and side-effect amelioration. Pharmacogenomic tests bundled with drugs are also known as companion diagnostics. Examples include the KRAS test with cetuximab and the EGFR test with gefitinib. Besides efficacy, germline pharmacogenetics can help to identify patients likely to suffer severe toxicities when given cytotoxic drugs whose detoxification is impaired by genetic polymorphisms, the canonical example being 5-FU. In particular, genetic variations affecting the genes coding for DPD, UGT1A1, TPMT, CDA, and CYP2D6 are now considered critical issues for patients treated with 5-FU/capecitabine, irinotecan, mercaptopurine/azathioprine, gemcitabine/capecitabine/AraC, and tamoxifen, respectively.

In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins. In patients with loss-of-function CYP2C19 variants who take clopidogrel, cardiovascular risk is elevated, which has led regulators to update the medication's package insert. In patients with type 2 diabetes, haptoglobin (Hp) genotyping shows an effect on cardiovascular disease, with Hp2-2 carriers at higher risk and supplemental vitamin E reducing risk through its effect on HDL.

In psychiatry, as of 2010, research has focused particularly on 5-HTTLPR and DRD2.

Clinical implementation

Initiatives to spur adoption by clinicians include the Ubiquitous Pharmacogenomics (U-PGx) program in Europe and the Clinical Pharmacogenetics Implementation Consortium (CPIC) in the United States. In a 2017 survey of European clinicians, two-thirds had not ordered a pharmacogenetic test in the prior year.

In 2010, Vanderbilt University Medical Center launched the Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT); in a 2015 survey, two-thirds of the clinicians had ordered a pharmacogenetic test.

In 2019, UnitedHealthcare, the largest private health insurer in the United States, announced that it would pay for genetic testing to predict response to psychiatric drugs.

In 2020, Canada's 4th largest health and dental insurer, Green Shield Canada, announced that it would pay for pharmacogenetic testing and its associated clinical decision support software to optimize and personalize mental health prescriptions.

Reduction of polypharmacy

A potential role for pharmacogenomics is to reduce the occurrence of polypharmacy: it is theorized that, with tailored drug treatments, patients will not need to take several medications to treat the same condition. This could potentially reduce the occurrence of adverse drug reactions, improve treatment outcomes, and save costs by avoiding the purchase of unnecessary medications. For example, possibly owing to inappropriate prescribing, psychiatric patients tend to receive more medications than age-matched non-psychiatric patients.

The need for pharmacogenomically tailored drug therapies is perhaps most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that an average of 82% of adults in the United States were taking at least one medication (prescription or nonprescription drug, vitamin/mineral, or herbal/natural supplement), and 29% were taking five or more. Those aged 65 years or older continue to be the biggest consumers of medications, with 17-19% in this age group taking at least ten medications in a given week. Polypharmacy was also shown to have increased since 2000, from 23% to 29%.

Example case studies

Case A – Antipsychotic adverse reaction

Patient A has schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone, and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine and transitioned to quetiapine. Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia, and neck pain, gained considerable weight, and had hallucinations. Five months later, quetiapine was tapered and discontinued due to the excessive weight gain, and ziprasidone was re-introduced into their treatment. Although the patient lost the excess weight they had gained, they then developed muscle stiffness, cogwheeling, tremors, and night sweats. When benztropine was added, they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation and developed difficulty with their gait, stiffness, cogwheeling, and dyskinetic ocular movements. A pharmacogenomic test later revealed that the patient had a CYP2D6 *1/*41 genotype, with a predicted phenotype of intermediate metabolizer (IM), and a CYP2C19 *1/*2 genotype, also with a predicted IM phenotype.

Case B – Pain Management

Patient B is a woman who gave birth by caesarian section. Her physician prescribed codeine for post-caesarian pain. She took the standard prescribed dose, but experienced nausea and dizziness while taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms had resolved. Had the patient undergone a pharmacogenomic test, it would likely have revealed a duplication of the CYP2D6 gene, placing her in the ultrarapid metabolizer (UM) category and explaining her reactions to codeine.

Case C – FDA Warning on Codeine Overdose for Infants

On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are CYP2D6 UMs and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). The agency issued its strongest warning, a boxed warning, to highlight the dangers of codeine use by CYP2D6 UMs. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes risk producing large amounts of morphine due to the increased activity of the enzyme. Morphine can rise to life-threatening or fatal levels, as became evident with the deaths of three children in August 2012.

Challenges

[Figure: Consecutive phases and associated challenges in pharmacogenomics]

Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include:

  • Limitations on how to apply the test in clinical practice and treatment;
  • A general feeling of lack of availability of the test;
  • The understanding and interpretation of evidence-based research;
  • Combining test results with other patient data for prescription optimization; and
  • Ethical, legal and social issues.

Issues surrounding the availability of the test include:

  • The lack of availability of scientific data: Although there are a considerable number of drug-metabolizing enzymes involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and
  • Demonstrating the cost-effectiveness of pharmacogenomics: Publications for the pharmacoeconomics of pharmacogenomics are scarce, therefore sufficient evidence does not at this time exist to validate the cost-effectiveness and cost-consequences of the test.

Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the factors above appear to be the most prevalent. Increasingly substantial evidence and guidelines from professional bodies for the clinical use of pharmacogenetics have made it a population-wide approach to precision medicine. Cost, reimbursement, education, and ease of use at the point of care remain significant barriers to wide-scale adoption.

Controversies

Race-based medicine

There have been calls to move away from race and ethnicity in medicine and instead use genetic ancestry to categorize patients. Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. As a result, some disease-specific guidelines only recommend pharmacogenetic testing for populations where high-risk alleles are more common and, similarly, certain insurance companies will only pay for pharmacogenetic testing for beneficiaries of high-risk populations.

Genetic exceptionalism

In the early 2000s, treating genetic information as exceptional, including giving it special legal or regulatory protections, garnered strong support. It was argued that genomic information may need special policy and practice protections within the context of electronic health records (EHRs). In 2008, the Genetic Information Nondiscrimination Act (GINA) was enacted to protect patients from discrimination by health insurance companies based on an individual's genetic information.

More recently it has been argued that genetic exceptionalism is past its expiration date as we move into a blended genomic/big-data era of medicine, yet exceptionalist practices continue to permeate clinical healthcare today. Garrison et al. recently issued a call to action to shift the terminology from genetic exceptionalism to genomic contextualism, recognizing a fundamental duality of genetic information. This allows room in the argument for different types of genetic information to be handled differently, while acknowledging that genomic information is similar to, and yet distinct from, other health-related information. Genomic contextualism would allow for a case-by-case analysis of the technology and the context of its use (e.g., clinical practice, research, secondary findings).

Others argue that genetic information is indeed distinct from other health-related information, but not to the extent of requiring special legal or regulatory protections; rather, it should be handled like other sensitive health-related data, such as HIV status. Additionally, Evans et al. argue that the EHR has sufficient privacy standards to hold other sensitive information, such as social security numbers, and that the fundamental purpose of an EHR is to house highly personal information. Similarly, a systematic review reported that the public had concerns over the privacy of genetic information, with 60% agreeing that maintaining privacy was not possible; however, 96% agreed that a direct-to-consumer testing company had protected their privacy, and 74% said their information would be similarly or better protected in an EHR. With increasing technological capabilities in EHRs, it is possible to mask or hide genetic data from subsets of providers, but there is no consensus on how, when, or from whom genetic information should be masked. Rigorous protection and masking of genetic information is argued to impede further scientific progress and clinical translation into routine clinical practice.

History

Pharmacogenomics was first recognized by Pythagoras around 510 BC, when he made a connection between the ingestion of fava beans and the hemolytic anemia and oxidative stress that could follow. In the 1950s, this identification was validated and attributed to a deficiency of G6PD; the condition is called favism. Although the first official publication did not appear until 1961, the unofficial beginnings of the science were in the 1950s: prolonged paralysis and fatal reactions linked to genetic variants in patients lacking butyrylcholinesterase ('pseudocholinesterase') following succinylcholine injection during anesthesia were first reported in 1956. The term pharmacogenetics was coined in 1959 by Friedrich Vogel of Heidelberg, Germany (although some papers suggest it was 1957 or 1958). In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins showing remarkable similarities in drug response compared to fraternal twins. The term pharmacogenomics first began appearing around the 1990s.

The first FDA approval of a pharmacogenetic test came in 2005, for alleles in CYP2D6 and CYP2C19.

Future

Computational advances have enabled cheaper and faster sequencing. Research has focused on combinatorial chemistry, genomic mining, omic technologies, and high throughput screening.

As the cost per genetic test decreases, the development of personalized drug therapies will increase. Technology now allows for genetic analysis of hundreds of target genes involved in medication metabolism and response in less than 24 hours for under $1,000, a huge step toward bringing pharmacogenetic technology into everyday medical decisions. Likewise, companies such as deCODE genetics, MD Labs Pharmacogenetics, Navigenics, and 23andMe offer genome scans. These companies use the same genotyping chips used in genome-wide association studies (GWAS) and provide customers with a write-up of individual risk for various traits and diseases, with testing for 500,000 known SNPs. Costs range from $995 to $2,500 and include updates with new data from studies as they become available. The more expensive packages even included a telephone session with a genetics counselor to discuss the results.

Ethics

Pharmacogenetics has become a controversial issue in the area of bioethics. Privacy and confidentiality are major concerns. The evidence of benefit or risk from a genetic test may only be suggestive, which could cause dilemmas for providers. Drug development may be affected, with rare genetic variants possibly receiving less research. Access and patient autonomy are also open to discussion.

Recombination (cosmology)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Recombination_(cosmology)

In cosmology, recombination refers to the epoch during which charged electrons and protons first became bound to form electrically neutral hydrogen atoms. Recombination occurred about 378,000 years after the Big Bang (at a redshift of z = 1100). The word "recombination" is misleading, since the Big Bang theory does not posit that protons and electrons had been combined before; the name exists for historical reasons, since it was coined before the Big Bang hypothesis became the primary theory of the birth of the universe.

Overview

Immediately after the Big Bang, the universe was a hot, dense plasma of photons, leptons, and quarks: the quark epoch. At 10⁻⁶ seconds, the Universe had expanded and cooled sufficiently to allow for the formation of protons: the hadron epoch. This plasma was effectively opaque to electromagnetic radiation due to Thomson scattering by free electrons, as the mean free path each photon could travel before encountering an electron was very short. (This is the current state of the interior of the Sun.) As the universe expanded, it also cooled. Eventually, the universe cooled to the point that the radiation field could not immediately ionize neutral hydrogen, and atoms became energetically favored. The fraction of free electrons and protons, as compared to neutral hydrogen, decreased to a few parts in 10,000.

Recombination involves electrons binding to protons (hydrogen nuclei) to form neutral hydrogen atoms. Because direct recombinations to the ground state (lowest energy) of hydrogen are very inefficient, these hydrogen atoms generally form with the electrons in a high energy state, and the electrons quickly transition to their low energy state by emitting photons. Two main pathways exist: from the 2p state by emitting a Lyman-α photon – these photons will almost always be reabsorbed by another hydrogen atom in its ground state – or from the 2s state by emitting two photons, which is very slow.

This production of photons is known as decoupling, which leads to recombination sometimes being called photon decoupling, but recombination and photon decoupling are distinct events. Once photons decoupled from matter, they traveled freely through the universe without interacting with matter, and they constitute what is observed today as cosmic microwave background radiation (in that sense, the cosmic background radiation is infrared, and some red, black-body radiation emitted when the universe was at a temperature of some 3000 K, redshifted by a factor of 1100 from the visible spectrum to the microwave spectrum).

Recombination time frames

The time frame for recombination can be estimated from the time dependence of the temperature of the cosmic microwave background (CMB). The microwave background is a blackbody spectrum representing the photons present at recombination, shifted in energy by the expansion of the universe. A blackbody is completely characterized by its temperature; the shift is called the redshift, denoted by z:

    T = (1 + z) × 2.7 K

where 2.7 K is today's temperature.
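As a quick arithmetic check of this relation, assuming (as the overview notes) that the radiation was emitted at roughly 3000 K:

    # Redshift-temperature relation T = (1 + z) * T0: a quick sanity check
    T0, T_emit = 2.7, 3000.0
    z = T_emit / T0 - 1
    print(f"z = T_emit/T0 - 1 = {z:.0f}")   # about 1100, as quoted above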

The thermal energy at the peak of the blackbody spectrum is the Boltzmann constant, kB, times the temperature, but simply comparing this to the ionization energy of hydrogen atoms does not account for the spectrum of energies. A better estimate evaluates the thermal equilibrium between matter (atoms) and radiation. The density of photons with energy E sufficient to ionize hydrogen is the total photon density times a factor from the equilibrium Boltzmann distribution, exp(−EI/(kB T)). At equilibrium this will approximately equal the matter (baryon) density. The ratio of photons to baryons is known from several sources, including measurements by the Planck satellite, to be around 10⁹. Solving for z gives a value around 1100, which converts to a cosmic time value around 400,000 years.

Recombination history of hydrogen

The cosmic ionization history is generally described in terms of the free electron fraction xe as a function of redshift. It is the ratio of the abundance of free electrons to the total abundance of hydrogen (both neutral and ionized). Denoting by ne the number density of free electrons, nH that of atomic hydrogen, and np that of ionized hydrogen (i.e. protons), xe is defined as

    xe = ne / (np + nH)

Since hydrogen only recombines once helium is fully neutral, charge neutrality implies ne = np, i.e. xe is also the fraction of ionized hydrogen.

Rough estimate from equilibrium theory

It is possible to find a rough estimate of the redshift of the recombination epoch assuming the recombination reaction is fast enough that it proceeds near thermal equilibrium. The relative abundance of free electrons, protons and neutral hydrogen is then given by the Saha equation:

    ne np / nH = (me kB T / (2π ħ²))^(3/2) exp(−EI / (kB T))

where me is the mass of the electron, kB is the Boltzmann constant, T is the temperature, ħ is the reduced Planck constant, and EI = 13.6 eV is the ionization energy of hydrogen. Charge neutrality requires ne = np, and the Saha equation can be rewritten in terms of the free electron fraction xe:

    xe² / (1 − xe) = (me kB T / (2π ħ²))^(3/2) exp(−EI / (kB T)) / (np + nH)

All quantities on the right-hand side are known functions of z, the redshift: the temperature is given by T = (1 + z) × 2.728 K, and the total density of hydrogen (neutral and ionized) is given by np + nH = (1 + z)³ × 1.6 m⁻³.

Solving this equation for a 50 percent ionization fraction yields a recombination temperature of roughly 4000 K, corresponding to redshift z = 1500.
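A short numerical sketch of this estimate, using the T(z) and density relations quoted above; the 50 percent ionization redshift it finds should come out near the z ≈ 1500 figure (the exact value depends slightly on the constants used):

    import numpy as np
    from scipy.optimize import brentq

    kB   = 1.380649e-23            # Boltzmann constant, J/K
    hbar = 1.054571817e-34         # reduced Planck constant, J s
    me   = 9.1093837015e-31        # electron mass, kg
    EI   = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

    def saha_mismatch(z, xe=0.5):
        """xe^2/(1-xe) minus the Saha right-hand side, with the text's T(z), n(z)."""
        T = 2.728 * (1 + z)        # temperature, K
        n = 1.6 * (1 + z)**3       # total hydrogen density, m^-3
        rhs = (me*kB*T / (2*np.pi*hbar**2))**1.5 * np.exp(-EI/(kB*T)) / n
        return xe**2 / (1 - xe) - rhs

    z_half = brentq(saha_mismatch, 500, 3000)   # redshift of 50% ionization
    print(f"z ≈ {z_half:.0f},  T ≈ {2.728*(1+z_half):.0f} K")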

Effective three-level atom

In 1968, physicists Jim Peebles in the US and Yakov Borisovich Zel'dovich and collaborators in the USSR independently computed the non-equilibrium recombination history of hydrogen. The basic elements of the model are the following.

  • Direct recombinations to the ground state of hydrogen are very inefficient: each such event leads to a photon with energy greater than 13.6 eV, which almost immediately re-ionizes a neighboring hydrogen atom.
  • Electrons therefore only efficiently recombine to the excited states of hydrogen, from which they cascade very quickly down to the first excited state, with principal quantum number n = 2.
  • From the first excited state, electrons can reach the ground state n = 1 through two pathways:
    • Decay from the 2p state by emitting a Lyman-α photon. This photon will almost always be reabsorbed by another hydrogen atom in its ground state. However, cosmological redshifting systematically decreases the photon frequency, and there is a small chance that it escapes reabsorption if it gets redshifted far enough from the Lyman-α line resonant frequency before encountering another hydrogen atom.
    • Decay from the 2s state by emitting two photons. This two-photon decay process is very slow, with a rate of 8.22 s⁻¹. It is however competitive with the slow rate of Lyman-α escape in producing ground-state hydrogen.
  • Atoms in the first excited state may also be re-ionized by the ambient CMB photons before they reach the ground state. When this is the case, it is as if the recombination to the excited state did not happen in the first place. To account for this possibility, Peebles defines the factor C as the probability that an atom in the first excited state reaches the ground state through either of the two pathways described above before being photoionized.

This model is usually described as an "effective three-level atom" as it requires keeping track of hydrogen under three forms: in its ground state, in its first excited state (assuming all the higher excited states are in Boltzmann equilibrium with it), and in its ionized state.

Accounting for these processes, the recombination history is then described by the differential equation

    dxe/dt = −C [ αB np xe − βB (1 − xe) exp(−E21 / (kB T)) ]

where αB is the "case B" recombination coefficient to the excited states of hydrogen, βB is the corresponding photoionization rate, and E21 = 10.2 eV is the energy of the first excited state. Note that the second term on the right-hand side of the above equation can be obtained by a detailed balance argument. The equilibrium result given in the previous section would be recovered by setting the left-hand side to zero, i.e. assuming that the net rates of recombination and photoionization are large in comparison to the Hubble expansion rate, which sets the overall evolution timescale for the temperature and density. However, C αB np is comparable to the Hubble expansion rate, and even gets significantly lower at low redshifts, leading to an evolution of the free electron fraction much slower than what one would obtain from the Saha equilibrium calculation. With modern values of cosmological parameters, one finds that the universe is 90% neutral at z ≈ 1070.
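The sketch below integrates this equation numerically. The background cosmology (H0, Ωm, Ωr, ΩΛ) and the case-B recombination fit are assumed, illustrative choices rather than values from the text, and a production code such as RECFAST adds helium and further corrections; the output should therefore only roughly track the quoted result that the universe is about 90% neutral near z ≈ 1070.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Physical constants (SI)
    kB   = 1.380649e-23        # Boltzmann constant, J/K
    hbar = 1.054571817e-34     # reduced Planck constant, J s
    me   = 9.1093837015e-31    # electron mass, kg
    eV   = 1.602176634e-19     # J per eV

    EI, E21   = 13.6 * eV, 10.2 * eV   # ionization and first-excited energies
    lam_alpha = 121.567e-9             # Lyman-alpha wavelength, m
    Lambda2s  = 8.22                   # 2s -> 1s two-photon rate, s^-1
    T0, nH0   = 2.728, 1.6             # CMB temperature (K) and hydrogen
                                       # density (m^-3) today, as in the text

    # Assumed background cosmology for H(z) -- illustrative values
    H0, Om, Or, OL = 2.2e-18, 0.31, 9.2e-5, 0.69

    def H(z):
        return H0 * np.sqrt(Om*(1+z)**3 + Or*(1+z)**4 + OL)

    def alphaB(T):
        # Case-B recombination coefficient fit (Pequignot et al. 1991), m^3/s
        t = T / 1e4
        return 4.309e-19 * t**-0.6166 / (1 + 0.6703 * t**0.5300)

    def rhs(z, y):
        xe = min(max(y[0], 1e-12), 1.0)
        T, nH = T0*(1+z), nH0*(1+z)**3
        aB = alphaB(T)
        # Photoionization rate from n=2, by detailed balance with alphaB
        nQ = (me*kB*T / (2*np.pi*hbar**2))**1.5
        betaB = aB * nQ * np.exp(-(EI - E21)/(kB*T))
        # Peebles C factor: chance an n=2 atom reaches n=1 before re-ionization
        K, n1s = lam_alpha**3 / (8*np.pi*H(z)), (1 - xe)*nH
        C = (1 + K*Lambda2s*n1s) / (1 + K*(Lambda2s + betaB)*n1s)
        dxe_dt = -C * (aB*(xe*nH)*xe - betaB*(1 - xe)*np.exp(-E21/(kB*T)))
        return [dxe_dt / (-(1+z)*H(z))]   # convert d/dt to d/dz

    sol = solve_ivp(rhs, (1600, 200), [1.0], method="LSODA",
                    dense_output=True, rtol=1e-8, atol=1e-10)
    for z in (1300, 1100, 1070, 900, 700):
        print(f"z = {z:4d}   xe = {sol.sol(z)[0]:.4f}")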

Modern developments

The simple effective three-level atom model described above accounts for the most important physical processes. However it does rely on approximations that lead to errors on the predicted recombination history at the level of 10% or so. Due to the importance of recombination for the precise prediction of cosmic microwave background anisotropies, several research groups have revisited the details of this picture over the last two decades.

The refinements to the theory can be divided into two categories:

  • Accounting for the non-equilibrium populations of the highly excited states of hydrogen. This effectively amounts to modifying the recombination coefficient αB.
  • Accurately computing the rate of Lyman-α escape and the effect of these photons on the 2s–1s transition. This requires solving a time-dependent radiative transfer equation. In addition, one needs to account for higher-order Lyman transitions. These refinements effectively amount to a modification of Peebles' C factor.

Modern recombination theory is believed to be accurate at the level of 0.1%, and is implemented in publicly available fast recombination codes.

Primordial helium recombination

Helium nuclei are produced during Big Bang nucleosynthesis and make up about 24% of the total mass of baryonic matter. The ionization energy of helium is larger than that of hydrogen, and it therefore recombines earlier. Because neutral helium carries two electrons, its recombination proceeds in two steps. The first recombination, He²⁺ + e⁻ → He⁺ + γ, proceeds near Saha equilibrium and takes place around redshift z ≈ 6000. The second recombination, He⁺ + e⁻ → He + γ, is slower than what would be predicted from Saha equilibrium and takes place around redshift z ≈ 2000. The details of helium recombination are less critical than those of hydrogen recombination for the prediction of cosmic microwave background anisotropies, since the universe is still very optically thick after helium has recombined and before hydrogen has started its recombination.

Primordial light barrier

Prior to recombination, photons were not able to freely travel through the universe, as they constantly scattered off the free electrons and protons. This scattering causes a loss of information, and "there is therefore a photon barrier at a redshift" near that of recombination that prevents us from using photons directly to learn about the universe at larger redshifts. Once recombination had occurred, however, the mean free path of photons greatly increased due to the lower number of free electrons. Shortly after recombination, the photon mean free path became larger than the Hubble length, and photons traveled freely without interacting with matter. For this reason, recombination is closely associated with the last scattering surface, which is the name for the last time at which the photons in the cosmic microwave background interacted with matter. However, these two events are distinct, and in a universe with different values for the baryon-to-photon ratio and matter density, recombination and photon decoupling need not have occurred at the same epoch.

Astrochemistry

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Astrochemistry
[Infographic: the theorized origin of the chemical elements that make up the human body]

Astrochemistry is the study of the abundance and reactions of molecules in the universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.

History

As an offshoot of the disciplines of astronomy and chemistry, the history of astrochemistry is founded upon the shared history of the two fields. The development of advanced observational and experimental spectroscopy has allowed for the detection of an ever-increasing array of molecules within solar systems and the surrounding interstellar medium. In turn, the increasing number of chemicals discovered through advancements in spectroscopy and other technologies has increased the size and scale of the chemical space available for astrochemical study.

History of spectroscopy

Observations of solar spectra as performed by Athanasius Kircher (1646), Jan Marek Marci (1648), Robert Boyle (1664), and Francesco Maria Grimaldi (1665) all predated Newton's 1666 work which established the spectral nature of light and resulted in the first spectroscope. Spectroscopy was first used as an astronomical technique in 1802 with the experiments of William Hyde Wollaston, who built a spectrometer to observe the spectral lines present within solar radiation. These spectral lines were later quantified through the work of Joseph von Fraunhofer.

Spectroscopy was first used to distinguish between different materials after the release of Charles Wheatstone's 1835 report that the sparks given off by different metals have distinct emission spectra. This observation was later built upon by Léon Foucault, who demonstrated in 1849 that identical absorption and emission lines result from the same material at different temperatures. An equivalent statement was independently postulated by Anders Jonas Ångström in his 1853 work Optiska Undersökningar, where it was theorized that luminous gases emit rays of light at the same frequencies as light which they may absorb.

This spectroscopic data began to take on theoretical importance with Johann Balmer's observation that the spectral lines exhibited by samples of hydrogen followed a simple empirical relationship, which came to be known as the Balmer series. This series is a special case of the more general Rydberg formula, developed by Johannes Rydberg in 1888, which allowed the calculation of spectral lines for multiple different chemical elements. The theoretical importance granted to these spectroscopic results was greatly expanded with the development of quantum mechanics, as the theory allowed these results to be compared to atomic and molecular emission spectra which had been calculated a priori.
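For concreteness, the Rydberg formula for hydrogen is 1/λ = R_H (1/n1² − 1/n2²), and Balmer's visible series corresponds to transitions down to n1 = 2; a quick check:

    # Rydberg formula check: 1/lambda = R_H (1/n1^2 - 1/n2^2), Balmer series n1 = 2
    R_H = 1.0967758e7   # Rydberg constant for hydrogen, m^-1

    for n2 in (3, 4, 5, 6):
        inv_lam = R_H * (1/2**2 - 1/n2**2)
        print(f"{n2} -> 2 : {1e9/inv_lam:.1f} nm")
    # prints ~656.5 (H-alpha), 486.3, 434.2, 410.3 nm -- Balmer's visible lines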

History of astrochemistry

While radio astronomy was developed in the 1930s, it was not until 1937 that any substantial evidence arose for the conclusive identification of an interstellar molecule – up until this point, the only chemical species known to exist in interstellar space were atomic. These findings were confirmed in 1940, when McKellar et al. identified and attributed spectroscopic lines in an as-of-then unidentified radio observation to CH and CN molecules in interstellar space. In the thirty years afterwards, a small selection of other molecules were discovered in interstellar space: the most important being OH, discovered in 1963 and significant as a source of interstellar oxygen, and H2CO (formaldehyde), discovered in 1969 and significant for being the first observed organic, polyatomic molecule in interstellar space.

The discovery of interstellar formaldehyde – and later, other molecules with potential biological significance, such as water or carbon monoxide – is seen by some as strong supporting evidence for abiogenetic theories of life: specifically, theories which hold that the basic molecular components of life came from extraterrestrial sources. This has prompted a still ongoing search for interstellar molecules which are either of direct biological importance – such as interstellar glycine, discovered in a comet within our solar system in 2009 – or which exhibit biologically relevant properties like chirality – an example of which (propylene oxide) was discovered in 2016 – alongside more basic astrochemical research.

Spectroscopy

One particularly important experimental tool in astrochemistry is spectroscopy through the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet etc.) able to detect only certain types of species, depending on the chemical properties of the molecules.

Perhaps the most powerful technique for detection of individual chemical species is radio astronomy, which has resulted in the detection of over a hundred interstellar species, including radicals and ions, and organic (i.e. carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that it is used to map out molecular regions. The radio observation of perhaps greatest human interest is the claim of interstellar glycine, the simplest amino acid, but with considerable accompanying controversy. One of the reasons why this detection was controversial is that although radio (and some other methods like rotational spectroscopy) are good for the identification of simple species with large dipole moments, they are less sensitive to more complex molecules, even something relatively small like amino acids.

Moreover, such methods are completely blind to molecules that have no dipole. For example, by far the most common molecule in the universe is H2 (hydrogen gas or, more precisely, dihydrogen), but it does not have a dipole moment, so it is invisible to radio telescopes. Moreover, such methods cannot detect species that are not in the gas phase. Since dense molecular clouds are very cold (10 to 50 K [−263.1 to −223.2 °C; −441.7 to −369.7 °F]), most molecules in them (other than dihydrogen) are frozen, i.e. solid. Instead, dihydrogen and these other molecules are detected using other wavelengths of light. Dihydrogen is easily detected in the ultraviolet (UV) and visible ranges from its absorption and emission of light (the hydrogen line). Moreover, most organic compounds absorb and emit light in the infrared (IR), so, for example, the detection of methane in the atmosphere of Mars was achieved using an IR ground-based telescope, NASA's 3-meter Infrared Telescope Facility atop Mauna Kea, Hawaii. NASA researchers use the airborne IR telescope SOFIA and the Spitzer space telescope for their observations and research. In a result somewhat related to the detection of methane in the atmosphere of Mars, Christopher Oze of the University of Canterbury in New Zealand and his colleagues reported in June 2012 that measuring the ratio of dihydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active." Other scientists have recently reported methods of detecting dihydrogen and methane in extraterrestrial atmospheres.

Infrared astronomy has also revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polyaromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the Galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium and isotopes of carbon, nitrogen, and oxygen that are very rare on Earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).

Infrared astronomy has also been used to assess the composition of solid materials in the interstellar medium, including silicates, kerogen-like carbon-rich solids, and ices. This is because unlike visible light, which is scattered or absorbed by solid particles, the IR radiation can pass through the microscopic interstellar particles, but in the process there are absorptions at certain wavelengths that are characteristic of the composition of the grains. As above with radio astronomy, there are certain limitations, e.g. N2 is difficult to detect by either IR or radio astronomy.

Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since dihydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing "reduced" species like H2O, CH4, and NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry. Lab experiments on the photochemistry of simple interstellar ices have produced amino acids. The similarity between interstellar and cometary ices (as well as comparisons of gas-phase compounds) has been invoked as an indicator of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission, but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.

Research

[Image: Transition from atomic to molecular gas at the border of the Orion molecular cloud]

Research is progressing on the way in which interstellar and circumstellar molecules form and interact, e.g. by including non-trivial quantum mechanical phenomena for synthesis pathways on interstellar particles. This research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day.

The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H3+ ion.

Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, as well as the structure of stellar interiors. If a star develops a largely convective envelope, dredge-up events can occur, bringing the products of nuclear burning to the surface. If the star is experiencing significant mass loss, the expelled material may contain molecules whose rotational and vibrational spectral transitions can be observed with radio and infrared telescopes. An interesting example of this is the set of carbon stars with silicate and water-ice outer envelopes. Molecular spectroscopy allows us to see these stars transitioning from an original composition in which oxygen was more abundant than carbon, to a carbon star phase where the carbon produced by helium burning is brought to the surface by deep convection, and dramatically changes the molecular content of the stellar wind.

In October 2011, scientists reported that cosmic dust contains organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars.

On August 29, 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.

In September, 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics – "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."

In February 2014, NASA announced the creation of an improved spectral database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.

On August 11, 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).

To study the resources of chemical elements and molecules in the universe, Professor M. Yu. Dolomatov developed a mathematical model of the distribution of molecular composition in the interstellar medium based on thermodynamic potentials, using methods from probability theory, mathematical and physical statistics, and equilibrium thermodynamics. Based on this model, the resources of life-related molecules, amino acids, and nitrogenous bases in the interstellar medium were estimated, and the possibility of the formation of oil hydrocarbon molecules was shown. These calculations support Sokolov's and Hoyle's hypotheses about the possibility of oil hydrocarbon formation in space. The results are confirmed by astrophysical observations and space research.

In July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67P's surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate, and propionaldehyde.

In December 2023, astronomers reported the first discovery, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
