Sunday, October 29, 2023

Fundamental theorem of arithmetic

From Wikipedia, the free encyclopedia
In Disquisitiones Arithmeticae (1801) Gauss proved the unique factorization theorem and used it to prove the law of quadratic reciprocity.

In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example,

1200 = 2⁴ × 3 × 5² = (2 × 2 × 2 × 2) × 3 × (5 × 5).

The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product.
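
As an illustration of both claims, here is a minimal Python sketch (ours, not part of the article) that recovers the multiset of prime factors by trial division; the helper name prime_factors is our own:

```python
from collections import Counter

def prime_factors(n: int) -> Counter:
    """Multiset of prime factors of n > 1, found by trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors[n] += 1
    return factors

print(prime_factors(1200))  # Counter({2: 4, 5: 2, 3: 1}): four 2s, one 3, two 5s
```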

The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique (for example, 12 = 2 × 6 = 3 × 4).

This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example,

2 = 2 × 1 = 2 × 1 × 1 = ...

The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold in general for rings of algebraic integers. This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind many of the numerous false proofs that were written during the 358 years between Fermat's statement and Wiles's proof.

History

The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements.

If two numbers by multiplying one another make some number, and any prime number measure the product, it will also measure one of the original numbers.

— Euclid, Elements Book VII, Proposition 30

(In modern terminology: if a prime p divides the product ab, then p divides either a or b or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic.

Any composite number is measured by some prime number.

— Euclid, Elements Book VII, Proposition 31

(In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent.

Any number either is prime or is measured by some prime number.

— Euclid, Elements Book VII, Proposition 32

Proposition 32 is derived from proposition 31, and proves that the decomposition is possible.

If a number be the least that is measured by prime numbers, it will not be measured by any other prime number except those originally measuring it.

— Euclid, Elements Book IX, Proposition 14

(In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case.

While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step and stated for the first time the fundamental theorem of arithmetic.

Article 16 of Gauss' Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.

Applications

Canonical representation of a positive integer

Every positive integer n > 1 can be represented in exactly one way as a product of prime powers

n = p1^n1 × p2^n2 × ... × pk^nk,

where p1 < p2 < ... < pk are primes and the ni are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0).

This representation is called the canonical representation of n, or the standard form of n. For example,

999 = 3³×37,
1000 = 2³×5³,
1001 = 7×11×13.

Factors p⁰ = 1 may be inserted without changing the value of n (for example, 1000 = 2³×3⁰×5³). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, as

n = 2^n1 × 3^n2 × 5^n3 × 7^n4 × ...,

where a finite number of the ni are positive integers, and the others are zero.

Allowing negative exponents provides a canonical form for positive rational numbers; for example, 3/4 = 2⁻² × 3.

Arithmetic operations

The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a and b themselves: writing a = p1^a1 p2^a2 ⋅⋅⋅ pk^ak and b = p1^b1 p2^b2 ⋅⋅⋅ pk^bk (where the exponents ai and bi are allowed to be zero),

a × b = p1^(a1+b1) × p2^(a2+b2) × ... × pk^(ak+bk),
gcd(a, b) = p1^min(a1,b1) × p2^min(a2,b2) × ... × pk^min(ak,bk),
lcm(a, b) = p1^max(a1,b1) × p2^max(a2,b2) × ... × pk^max(ak,bk).

However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice.
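
To make the exponent rules concrete, here is a small Python sketch of ours (reusing the trial-division helper from the earlier sketch) that checks the sum/min/max rules against the standard library:

```python
import math  # math.lcm requires Python 3.9+
from collections import Counter

def prime_factors(n: int) -> Counter:
    """Trial-division factorization, as in the earlier sketch."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def from_exponents(exps: dict) -> int:
    """Rebuild an integer from a {prime: exponent} mapping."""
    out = 1
    for p, e in exps.items():
        out *= p ** e
    return out

a, b = prime_factors(720), prime_factors(1200)
primes = set(a) | set(b)            # missing keys count as exponent 0
gcd  = from_exponents({p: min(a[p], b[p]) for p in primes})
lcm  = from_exponents({p: max(a[p], b[p]) for p in primes})
prod = from_exponents({p: a[p] + b[p] for p in primes})

assert gcd == math.gcd(720, 1200) and lcm == math.lcm(720, 1200)
assert prod == 720 * 1200
print(gcd, lcm)  # 240 3600
```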

Arithmetic functions

Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.
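
For instance, the number-of-divisors function d(n) is multiplicative and takes the value e + 1 on a prime power p^e, so d(n) = (n1+1)(n2+1)⋅⋅⋅(nk+1). A brief sketch of ours, again reusing the trial-division helper:

```python
from collections import Counter

def prime_factors(n: int) -> Counter:
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def num_divisors(n: int) -> int:
    """Multiplicative function d(n): a prime power p^e contributes e + 1."""
    out = 1
    for e in prime_factors(n).values():
        out *= e + 1
    return out

# 1200 = 2^4 * 3 * 5^2, so d(1200) = (4+1)(1+1)(2+1) = 30.
assert num_divisors(1200) == sum(1 for d in range(1, 1201) if 1200 % d == 0)
print(num_divisors(1200))  # 30
```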

Proof

The proof uses Euclid's lemma (Elements VII, 30): If a prime divides the product of two integers, then it must divide at least one of these integers.

Existence

It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = a b, and 1 < a ≤ b < n. By the induction hypothesis, a = p1 p2 ⋅⋅⋅ pj and b = q1 q2 ⋅⋅⋅ qk are products of primes. But then n = a b = p1 p2 ⋅⋅⋅ pj q1 q2 ⋅⋅⋅ qk is a product of primes.
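
The strong induction maps directly onto a short recursive procedure (a sketch of ours; the function name is our own choice): either n has no nontrivial divisor and is prime, or it splits as n = a·b and both parts are factored recursively.

```python
def factor_into_primes(n: int) -> list[int]:
    """Existence proof as recursion: split off a nontrivial divisor, or report n prime."""
    assert n > 1
    for a in range(2, int(n**0.5) + 1):
        if n % a == 0:                      # n = a * b with 1 < a <= b < n
            return factor_into_primes(a) + factor_into_primes(n // a)
    return [n]                              # no nontrivial divisor: n is prime

print(factor_into_primes(360))  # [2, 2, 2, 3, 3, 5]
```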

Uniqueness

Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p1 p2 ... pj = q1 q2 ... qk, where each pi and qi is prime. We see that p1 divides q1 q2 ... qk, so p1 divides some qi by Euclid's lemma. Without loss of generality, say p1 divides q1. Since p1 and q1 are both prime, it follows that p1 = q1. Returning to our factorizations of n, we may cancel these two factors to conclude that p2 ... pj = q2 ... qk. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n.

Uniqueness without Euclid's lemma

The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. The proof that follows is inspired by Euclid's original version of the Euclidean algorithm.

Assume that s is the smallest positive integer that is the product of prime numbers in two different ways. Incidentally, this implies that s, if it exists, must be a composite number greater than 1. Now, say

s = p1 p2 ⋅⋅⋅ pm = q1 q2 ⋅⋅⋅ qn.

Every pi must be distinct from every qj. Otherwise, if say pi = qj, then there would exist some positive integer s/pi that is smaller than s and has two distinct prime factorizations. One may also suppose that p1 < q1, by exchanging the two factorizations, if needed.

Setting P = p2 ⋅⋅⋅ pm and Q = q2 ⋅⋅⋅ qn, one has s = p1 P = q1 Q. Also, since p1 < q1, one has Q < P. It then follows that

s − p1 Q = (q1 − p1) Q = p1 (P − Q) < s.

As the positive integers less than s have been supposed to have a unique prime factorization, p1 must occur in the factorization of either q1 − p1 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and p1 differs from every qj. The former case is also impossible, as, if p1 is a divisor of q1 − p1, it must also be a divisor of q1, which is impossible as p1 and q1 are distinct primes.

Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must be either a prime number itself, which factors uniquely, or a composite that factors uniquely into primes, or, in the case of the integer 1, not factor into any prime.

Generalizations

The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers; it is now denoted ℤ[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that the composites have unique factorization as a product of primes (up to the order and multiplication by units).

Similarly, in 1844, while working on cubic reciprocity, Eisenstein introduced the ring ℤ[ω], where ω = (−1 + √−3)/2 is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units ±1, ±ω, ±ω² and that it has unique factorization.

However, it was also discovered that unique factorization does not always hold. An example is given by the ring ℤ[√−5]. In this ring one has

6 = 2 × 3 = (1 + √−5)(1 − √−5).

Examples like this caused the notion of "prime" to be modified. In ℤ[√−5] it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in ℤ[√−5] (only divisible by itself or a unit) but not prime in ℤ[√−5] (if it divides a product it must divide one of the factors). The mention of ℤ[√−5] is required because 2 is prime and irreducible in ℤ. Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers ℤ every irreducible is prime". This is also true in ℤ[i] and ℤ[ω], but not in ℤ[√−5].
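
The arithmetic behind this example can be checked with the norm N(a + b√−5) = a² + 5b², which is multiplicative. A small sketch of ours in Python (the function name is our own):

```python
def norm(a: int, b: int) -> int:
    """Norm of a + b*sqrt(-5) in Z[sqrt(-5)]: N = a^2 + 5b^2 (multiplicative)."""
    return a * a + 5 * b * b

# 6 factors two ways: 2 * 3 and (1 + sqrt(-5)) * (1 - sqrt(-5)).
# Norms agree: N(2)*N(3) = 4*9 = 36 and N(1+sqrt(-5))*N(1-sqrt(-5)) = 6*6 = 36.
print(norm(2, 0), norm(3, 0), norm(1, 1), norm(1, -1))  # 4 9 6 6

# 2 is irreducible: a proper factor would need norm 2 (since N(2) = 4 and the
# only norm-1 elements are the units +/-1), but a^2 + 5b^2 = 2 has no solution;
# |b| >= 1 gives N >= 5 and |a| >= 2 gives N >= 4, so this small search suffices.
print(any(norm(a, b) == 2 for a in range(-2, 3) for b in range(-1, 2)))  # False
```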

The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.

In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.

There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.

Primordial black hole

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Primordial_black_hole
Formation of the universe without (above) and with (below) primordial black holes

In cosmology, primordial black holes (PBHs) are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes.

Yakov Borisovich Zel'dovich and Igor Dmitriyevich Novikov in 1966 first proposed the existence of such black holes, while the first in-depth study was conducted by Stephen Hawking in 1971. However, their existence has not been proven and remains hypothetical. In September 2022, primordial black holes were proposed by some researchers to explain the unexpected very large early galaxies discovered by the James Webb Space Telescope (JWST).

PBHs have long been considered potentially important, if not dominant, components of dark matter, the latter perspective having been strengthened by both LIGO/Virgo interferometer gravitational wave and JWST observations. Early constraints on PBHs as dark matter usually assumed that most black holes would have similar or identical ("monochromatic") mass, an assumption disproven by LIGO/Virgo results, and JWST observations of early large galaxies further suggested that the actual black hole mass distribution is broadly platykurtic.

History

Depending on the model, primordial black holes could have initial masses ranging from 10⁻⁸ kg (the so-called Planck relics) to more than thousands of solar masses. However, primordial black holes originally having mass lower than 10¹¹ kg would not have survived to the present due to Hawking radiation, which causes complete evaporation in a time much shorter than the age of the Universe. Primordial black holes are non-baryonic, and as such are plausible dark matter candidates. Primordial black holes are also good candidates for being the seeds of the supermassive black holes at the center of massive galaxies, as well as of intermediate-mass black holes.

Primordial black holes belong to the class of massive compact halo objects (MACHOs). They are naturally a good dark matter candidate: they are (nearly) collisionless and stable (if sufficiently massive), they have non-relativistic velocities, and they form very early in the history of the Universe (typically less than one second after the Big Bang). Nevertheless, critics maintain that tight limits on their abundance have been set by various astrophysical and cosmological observations, which would exclude a significant contribution to dark matter over most of the plausible mass range. However, new research has reopened the possibility, whereby these black holes would sit in clusters with a 30-solar-mass primordial black hole at the center.

In March 2016, one month after the announcement of the detection by Advanced LIGO/VIRGO of gravitational waves emitted by the merging of two 30 solar mass black holes (about 6×10³¹ kg), three groups of researchers independently proposed that the detected black holes had a primordial origin. Two of the groups found that the merging rates inferred by LIGO are consistent with a scenario in which all the dark matter is made of primordial black holes, if a non-negligible fraction of them are somehow clustered within halos such as faint dwarf galaxies or globular clusters, as expected by the standard theory of cosmic structure formation. The third group claimed that these merging rates are incompatible with an all-dark-matter scenario and that primordial black holes could contribute only less than one percent of the total dark matter. The unexpectedly large mass of the black holes detected by LIGO has strongly revived interest in primordial black holes with masses in the range of 1 to 100 solar masses. It is still debated whether this range is excluded by other observations, such as the absence of micro-lensing of stars, the cosmic microwave background anisotropies, the size of faint dwarf galaxies, and the absence of correlation between X-ray and radio sources towards the galactic center.

In May 2016, Alexander Kashlinsky suggested that the observed spatial correlations in the unresolved gamma-ray and X-ray background radiations could be due to primordial black holes with similar masses, if their abundance is comparable to that of dark matter.

In August 2019, a study was published opening up the possibility of making up all dark matter with asteroid-mass primordial black holes (3.5×10⁻¹⁷ – 4×10⁻¹² solar masses, or 7×10¹³ – 8×10¹⁸ kg).

In September 2019, a report by James Unwin and Jakub Scholtz proposed the possibility of a primordial black hole (PBH) with a mass of 5–15 M⊕ (Earth masses), about the diameter of a tennis ball, existing in the extended Kuiper Belt to explain the orbital anomalies that are theorized to be the result of a ninth planet in the solar system.

In October 2019, Derek Inman and Yacine Ali-Haïmoud published an article in which they found that the nonlinear velocities which arise from structure formation are too small to significantly affect the constraints that arise from CMB anisotropies.

In September 2021, the NANOGrav collaboration announced that they had found a low-frequency signal that could be attributed to gravitational waves and potentially be associated with PBHs, but so far it has not been confirmed as a gravitational wave signal.

In September 2022, primordial black holes were used to explain the unexpected very large early (high redshift) galaxies discovered by the James Webb Space Telescope.

Formation

Primordial black holes were possibly formed by the collapse of overdense regions in the inflationary or early radiation-dominated universe.

Primordial black holes could have formed in the very early Universe (less than one second after the Big Bang) during the inflationary era, or in the very early radiation-dominated era. The essential ingredient for the formation of a primordial black hole is a fluctuation in the density of the Universe, inducing its gravitational collapse. One typically requires density contrasts δρ/ρ ∼ 0.1 (where ρ is the density of the Universe) to form a black hole.
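
A common order-of-magnitude estimate (our sketch, not from the article) is that a PBH forming at time t after the Big Bang acquires roughly the mass contained within the cosmological horizon, M ≈ c³t/G, which is why formation times from around 10⁻²³ s up to about one second span the huge mass range quoted above. This scaling ignores O(1) collapse-efficiency factors.

```python
# Order-of-magnitude horizon mass M ~ c^3 * t / G for a PBH forming at time t.
C = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg

def horizon_mass(t_seconds: float) -> float:
    """Mass within the cosmological horizon at time t (rough estimate)."""
    return C**3 * t_seconds / G

for t in (1e-23, 1e-5, 1.0):
    m = horizon_mass(t)
    print(f"t = {t:8.0e} s  ->  M ~ {m:.1e} kg  ({m / M_SUN:.1e} solar masses)")
```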

Production mechanisms

There are several mechanisms able to produce such inhomogeneities in the context of cosmic inflation (e.g., in hybrid inflation models). Some examples include:

Axion Inflation

Axion inflation is a theoretical model in which the axion acts as the inflaton field. Because of the period in which it is created, the field oscillates about the minimum of its potential, and these oscillations are responsible for energy density fluctuations in the early universe.

Reheating

Reheating is the transitory process between the inflationary period and the hot, dense, radiation-dominated period. During this time the inflaton field decays into other particles, which then interact in order to reach thermal equilibrium. If this process is incomplete, however, it creates density fluctuations, and if these are large enough they could be responsible for the formation of PBHs.

Cosmological phase transitions

Cosmological phase transitions may cause inhomogeneities in different ways depending on the specific details of each transition. For example, one mechanism is concerned with the collapse of overdense regions that arise from these phase transitions, while another involves highly energetic particles that are produced in these phase transitions and then undergo gravitational collapse, forming PBHs.

Implications

Dark matter problem

The dark matter problem, posed in 1933 by the Swiss-American astronomer Fritz Zwicky, refers to the fact that scientists still do not know what form dark matter takes. PBHs could resolve it in several ways. First, if PBHs account for all or a significant fraction of the dark matter in the universe, this would explain the gravitational effects seen in galaxies and galaxy clusters. Second, unlike WIMPs, PBHs have several distinct proposed production mechanisms, and they can emit gravitational waves while interacting with regular matter. Finally, the discovery of PBHs could explain some of the observed gravitational lensing effects that could not arise from ordinary matter. While evidence that primordial black holes may constitute dark matter remains inconclusive as of 2023, researchers such as Bernard Carr and others are strong proponents.

Galaxy formation

Since primordial black holes do not necessarily have to be small (they can have any size), they may have contributed to the formation of galaxies, including those that formed earlier than expected.

Cosmological domain wall problem

The cosmological domain wall problem, proposed in 1974 by the Soviet physicist Yakov Zeldovich, concerns the formation of domain walls during phase transitions in the early universe and the consequences of their large energy densities. PBHs could resolve this problem in several ways. One possibility is that PBHs prevent the formation of domain walls: by exerting gravitational forces on the surrounding matter and making it clump, they could theoretically stop the walls from forming. Another is that PBHs could cause domain walls to decay: if walls formed in the early universe before PBHs, gravitational interactions could eventually collapse them into PBHs. Finally, PBHs need not violate observational constraints: if PBHs in the 10¹⁵–10¹⁶ gram mass range were detected, they would have the right density to make up all the dark matter in the universe without violating constraints, and the domain wall problem would not arise.

Cosmological monopole problem

The cosmological monopole problem, also proposed by Yakov Zeldovich in the late 1970s, concerns the apparent absence of magnetic monopoles today. PBHs could also serve as a solution to this problem. First, if magnetic monopoles existed in the early universe, they could have interacted gravitationally with PBHs and been absorbed, explaining their absence. Alternatively, PBHs could have exerted gravitational forces on matter, causing it to clump and diluting the density of magnetic monopoles.

String theory

General relativity predicts the smallest primordial black holes would have evaporated by now, but if there were a fourth spatial dimension – as predicted by string theory – it would affect how gravity acts on small scales and "slow down the evaporation quite substantially". In essence, the energy stored in the fourth spatial dimension as a stationary wave would bestow a significant rest mass to the object when regarded in the conventional four-dimensional space-time. This could mean there are several thousand primordial black holes in our galaxy. To test this theory, scientists will use the Fermi Gamma-ray Space Telescope which was put in orbit by NASA on June 11, 2008. If they observe specific small interference patterns within gamma-ray bursts, it could be the first indirect evidence for primordial black holes and string theory.

Observational limits and detection strategies

A variety of observations have been interpreted to place limits on the abundance and mass of primordial black holes:

Lifetime, Hawking radiation and gamma-rays: One way to detect primordial black holes, or to constrain their mass and abundance, is by their Hawking radiation. Stephen Hawking theorized in 1974 that large numbers of such smaller primordial black holes might exist in the Milky Way, in our galaxy's halo region. All black holes are theorized to emit Hawking radiation at a rate inversely proportional to their mass. Since this emission further decreases their mass, black holes with very small mass would experience runaway evaporation, creating a burst of radiation at the final phase, equivalent to a hydrogen bomb yielding millions of megatons of explosive force. A regular black hole (of about 3 solar masses) cannot lose all of its mass within the current age of the universe (it would take about 10⁶⁹ years to do so, even without any matter falling in). However, since primordial black holes are not formed by stellar core collapse, they may be of any size. A black hole with a mass of about 10¹¹ kg would have a lifetime about equal to the age of the universe. If such low-mass black holes were created in sufficient number in the Big Bang, we should be able to observe explosions by some of those that are relatively nearby in our own Milky Way galaxy. NASA's Fermi Gamma-ray Space Telescope satellite, launched in June 2008, was designed in part to search for such evaporating primordial black holes. Fermi data set the limit that less than one percent of dark matter could be made of primordial black holes with masses up to 10¹³ kg. Evaporating primordial black holes would also have had an impact on Big Bang nucleosynthesis and would have changed the abundances of light elements in the Universe. However, if theoretical Hawking radiation does not actually exist, such primordial black holes would be extremely difficult, if not impossible, to detect in space due to their small size and lack of large gravitational influence.
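
These lifetime figures can be reproduced with the standard evaporation-time estimate t ≈ 5120π G²M³/(ħc⁴). The sketch below is ours; it ignores greybody factors and the extra particle channels that open at high temperature, so the outputs should be read as order-of-magnitude values only.

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C    = 2.998e8     # speed of light, m/s
YEAR = 3.156e7     # seconds per year
AGE_UNIVERSE = 13.8e9 * YEAR

def evaporation_time(mass_kg: float) -> float:
    """Hawking evaporation time, t ~ 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# ~10^11 kg (roughly the critical PBH mass) and a 3-solar-mass black hole.
for m in (1e11, 3 * 1.989e30):
    t = evaporation_time(m)
    print(f"M = {m:.1e} kg -> t ~ {t / YEAR:.1e} yr "
          f"({t / AGE_UNIVERSE:.1e} x age of the Universe)")
```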

Temperature anisotropies in the cosmic microwave background: Accretion of matter onto primordial black holes in the early Universe should lead to energy injection in the medium that affects the recombination history of the Universe. This effect induces signatures in the statistical distribution of the cosmic microwave background (CMB) anisotropies. The Planck observations of the CMB exclude that primordial black holes with masses in the range 100–10⁴ solar masses contribute importantly to the dark matter, at least in the simplest conservative model. It is still debated whether the constraints are stronger or weaker in more realistic or complex scenarios.

Gamma-ray signatures from annihilating dark matter: If the dark matter in the Universe is in the form of weakly interacting massive particles or WIMPs, primordial black holes would accrete a halo of WIMPs around them in the early universe. The annihilation of WIMPs in the halo leads to a signal in the gamma-ray spectrum which is potentially detectable by dedicated instruments such as the Fermi Gamma-ray Space Telescope.

In the future, new limits will be set up by various observations:

  • The Square Kilometre Array (SKA) radio telescope will probe the effects of primordial black holes on the reionization history of the Universe, due to energy injection into the intergalactic medium, induced by matter accretion onto primordial black holes.
  • LIGO, VIRGO and future gravitational wave detectors will detect new black hole merger events, from which one could reconstruct the mass distribution of primordial black holes. These detectors could distinguish unambiguously between primordial and stellar origins if merger events involving black holes with masses lower than 1.4 solar masses are detected. Another way would be to measure the large orbital eccentricity of primordial black hole binaries.
  • Gravitational wave detectors, such as the Laser Interferometer Space Antenna (LISA) and pulsar timing arrays, will also probe the stochastic background of gravitational waves emitted by primordial black hole binaries when they are still orbiting relatively far from each other.
  • New detections of faint dwarf galaxies, and the observations of their central star cluster, could be used to test the hypothesis that these dark matter-dominated structures contain primordial black holes in abundance.
  • Monitoring star positions and velocities within the Milky Way could be used to detect the influence of a nearby primordial black hole.
  • It has been suggested that a small black hole passing through the Earth would produce a detectable acoustic signal. Because of its tiny diameter, large mass compared to a nucleon, and relatively high speed, such primordial black holes would simply transit Earth virtually unimpeded with only a few impacts on nucleons, exiting the planet with no ill effects.
  • Another way to detect primordial black holes could be by watching for ripples on the surfaces of stars. If the black hole passed through a star, its density would cause observable vibrations.
  • Monitoring quasars in the microwave wavelength and detection of the wave optics feature of gravitational microlensing by the primordial black holes.
  • Another observational consequence would be primordial black holes colliding with Earth and being trapped inside it, a scenario studied by S. Rahvar in 2021. Assuming all the dark matter is made of primordial black holes with a mass function allowed by the observational constraints, the expected waiting time for such an event is much longer than the age of the Universe.

Facilities able to provide PBH measurement

None of these facilities is focused on the direct detection of PBHs, since they remain a theoretical phenomenon, but the information collected in each respective experiment provides secondary data that can help provide insight into, and constraints on, the nature of PBHs.

GW-detectors

  • LIGO/VIRGO: These detectors already place important constraints on the limits of PBHs. However, they are always searching for new, unexpected signals; if they detect a black hole in a mass range that does not correspond to stellar evolution theory, it could serve as evidence for PBHs.
  • Cosmic Explorer/Einstein Telescope: Both of these projects serve as the next generation of LIGO/VIRGO; they would increase sensitivity around the 10–100 Hz band and would allow probing PBH information at higher redshifts.
  • NANOGrav: This collaboration detected a stochastic signal, but it is not yet a confirmed gravitational wave signal since quadrupolar correlations have not been detected. Should it be confirmed, it could serve as evidence for sub-solar-mass PBHs.
  • Laser Interferometer Space Antenna (LISA): Like any GW detector, LISA has great potential to detect PBHs. Its uniqueness lies in its ability to detect extreme mass ratio inspirals, in which low-mass black holes merge with massive objects. Its sensitivity would also allow it to detect and confirm the stochastic NANOGrav signal.
  • AEDGE (Atomic Experiment for Dark Matter and Gravity Exploration in Space): This proposed mid-range gravitational wave experiment is unique in its ability to detect intermediate mass ratio mergers, like those theorized during early supermassive black hole assembly; detecting these would serve as evidence for PBHs.

Space telescopes

  • Nancy Grace Roman Space Telescope (WFIRST): As a space telescope, WFIRST will be capable of detecting, or at least placing constraints on, PBHs through different types of lensing, one of which is astrometric lensing: when an object passes in front of a known light source, such as a star, it slightly shifts the source's apparent position, on the order of microarcseconds (see the sketch below).
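
For scale, the angular Einstein radius θ_E = sqrt(4GM/c² × (D_S − D_L)/(D_L D_S)) of an Earth-mass lens halfway to a source a kiloparsec away is indeed of microarcsecond order. This is our own back-of-envelope sketch; the distances chosen are illustrative, not from the article.

```python
import math

G, C = 6.674e-11, 2.998e8   # SI units
M_EARTH = 5.972e24          # kg
PC = 3.086e16               # one parsec in metres
RAD_TO_UAS = 180 / math.pi * 3600 * 1e6   # radians -> microarcseconds

def einstein_radius_uas(mass_kg: float, d_lens_m: float, d_source_m: float) -> float:
    """Angular Einstein radius for a point lens, in microarcseconds."""
    theta = math.sqrt(4 * G * mass_kg / C**2
                      * (d_source_m - d_lens_m) / (d_lens_m * d_source_m))
    return theta * RAD_TO_UAS

# Earth-mass PBH at 500 pc lensing a star at 1 kpc: ~5 microarcseconds.
print(f"{einstein_radius_uas(M_EARTH, 500 * PC, 1000 * PC):.1f} microarcseconds")
```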

Sky Surveys

  • Vera C. Rubin Observatory (LSST): This will provide the capability of directly measuring the mass function of compact objects by microlensing. It will be able to observe both low- and high-mass objects, placing constraints on both ends of the spectrum. LSST will also be able to detect kilonovae that lack gravitational wave signals, which is relevant to the existence of PBHs.

Very Large Arrays

  • ngVLA: The next-generation Very Large Array will be able to improve GW bounds by an order of magnitude over the current constraints placed by NANOGrav. This increased sensitivity should be able to confirm the nature of the GW signal from NANOGrav and to discriminate a PBH explanation from other sources.

Fast Radio Bursts observatories

MeV Gamma-Ray Telescopes

  • Since the MeV gamma-ray band has yet to be explored, proposed experiments could place tighter constraints on the abundance of PBHs in the asteroid-mass range. Some examples of proposed telescopes include:
    • AdEPT
    • AMEGO
    • All-Sky ASTROGAM
    • GECCO
    • GRAMS
    • MAST
    • PANGU

GeV and TeV Gamma-Ray Observatories

Difference from direct collapse black holes

A direct collapse black hole is the result of the collapse of unusually dense and large regions of gas, after the radiation-dominated era, while primordial black holes result from the direct collapse of energy, ionized matter, or both, during the inflationary or radiation-dominated eras.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...