
Wednesday, August 16, 2023

Two-body problem

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Two-body_problem
 
Top: Two bodies with similar mass orbiting a common barycenter external to both bodies, with elliptic orbits typical of binary stars. Bottom: Two bodies with a "slight" difference in mass orbiting a common barycenter. The sizes and this type of orbit are similar to the Pluto–Charon system (in which the barycenter is external to both bodies) and to the Earth–Moon system, where the barycenter is internal to the larger body.

In classical mechanics, the two-body problem is to predict the motion of two massive objects which are abstractly viewed as point particles. The problem assumes that the two objects interact only with one another; the only force affecting each object arises from the other one, and all other objects are ignored.

The most prominent case of the classical two-body problem is the gravitational case (see also Kepler problem), arising in astronomy for predicting the orbits (or escapes from orbit) of objects such as satellites, planets, and stars. A two-point-particle model of such a system nearly always describes its behavior well enough to provide useful insights and predictions.

A simpler "one body" model, the "central-force problem", treats one object as the immobile source of a force acting on the other. One then seeks to predict the motion of the single remaining mobile object. Such an approximation can give useful results when one object is much more massive than the other (as with a light planet orbiting a heavy star, where the star can be treated as essentially stationary).

However, the one-body approximation is usually unnecessary except as a stepping stone. For many forces, including gravitational ones, the general version of the two-body problem can be reduced to a pair of one-body problems, allowing it to be solved completely, and giving a solution simple enough to be used effectively.

By contrast, the three-body problem (and, more generally, the n-body problem for n ≥ 3) cannot be solved in terms of first integrals, except in special cases.

Results for prominent cases

Gravitation and other inverse-square examples

The two-body problem is interesting in astronomy because pairs of astronomical objects are often moving rapidly in arbitrary directions (so their motions become interesting), widely separated from one another (so they will not collide) and even more widely separated from other objects (so outside influences will be small enough to be ignored safely).

Under the force of gravity, each member of a pair of such objects will orbit their mutual center of mass in an elliptical pattern, unless they are moving fast enough to escape one another entirely, in which case their paths will diverge along other planar conic sections. If one object is very much heavier than the other, it will move far less than the other with reference to the shared center of mass. The mutual center of mass may even be inside the larger object.
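To make this picture concrete, here is a minimal numerical sketch (not from the article; the masses, initial conditions, and use of G = 1 units are arbitrary illustrative choices) that integrates two mutually gravitating point masses and lets each trace an ellipse about their shared barycenter. With the values chosen, the total momentum is zero, so the barycenter stays at the origin throughout the integration.

```python
import numpy as np

# Illustrative two-body setup in units where G = 1 (all values are arbitrary).
G = 1.0
m1, m2 = 3.0, 1.0                      # unequal masses, as in the Pluto-Charon sketch
x1 = np.array([-0.25, 0.0])            # positions chosen so the barycenter starts at the origin
x2 = np.array([0.75, 0.0])
v1 = np.array([0.0, -0.5])             # velocities chosen so the total momentum is zero
v2 = np.array([0.0, 1.5])

def accelerations(x1, x2):
    """Mutual gravitational accelerations (inverse-square attraction)."""
    r = x2 - x1
    d3 = np.linalg.norm(r) ** 3
    a1 = G * m2 * r / d3               # force on body 1 points toward body 2
    a2 = -G * m1 * r / d3
    return a1, a2

dt, steps = 1e-3, 20000
trajectory = []                        # record positions (e.g., for plotting)
a1, a2 = accelerations(x1, x2)
for _ in range(steps):                 # leapfrog (kick-drift-kick) integration
    v1 += 0.5 * dt * a1
    v2 += 0.5 * dt * a2
    x1 += dt * v1
    x2 += dt * v2
    a1, a2 = accelerations(x1, x2)
    v1 += 0.5 * dt * a1
    v2 += 0.5 * dt * a2
    trajectory.append((x1.copy(), x2.copy()))

# The barycenter should stay (numerically) fixed while each body orbits it.
barycenter = (m1 * x1 + m2 * x2) / (m1 + m2)
print("barycenter after integration:", barycenter)
```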

For the derivation of the solutions to the problem, see Classical central-force problem or Kepler problem.

In principle, the same solutions apply to macroscopic problems involving objects interacting not only through gravity, but through any other attractive scalar force field obeying an inverse-square law, with electrostatic attraction being the obvious physical example. In practice, such problems rarely arise. Except perhaps in experimental apparatus or other specialized equipment, we rarely encounter electrostatically interacting objects which are moving fast enough, and in such a direction, as to avoid colliding, and/or which are isolated enough from their surroundings.

The dynamics of a two-body system under the influence of a torque turns out to be governed by a Sturm–Liouville equation.

Inapplicability to atoms and subatomic particles

Although the two-body model treats the objects as point particles, classical mechanics applies only to systems of macroscopic scale. Most behavior of subatomic particles cannot be predicted under the classical assumptions underlying this article or using the mathematics here.

Electrons in an atom are sometimes described as "orbiting" its nucleus, following an early conjecture of Niels Bohr (this is the source of the term "orbital"). However, electrons don't actually orbit nuclei in any meaningful sense, and quantum mechanics is necessary for any useful understanding of the electron's real behavior. Solving the classical two-body problem for an electron orbiting an atomic nucleus is misleading and does not produce many useful insights.

Reduction to two independent, one-body problems

The complete two-body problem can be solved by re-formulating it as two one-body problems: a trivial one and one that involves solving for the motion of one particle in an external potential. Since many one-body problems can be solved exactly, the corresponding two-body problem can also be solved.

Jacobi coordinates for the two-body problem: $\mathbf{R} = \frac{m_1 \mathbf{x}_1 + m_2 \mathbf{x}_2}{M}$ and $\mathbf{r} = \mathbf{x}_1 - \mathbf{x}_2$, with $M = m_1 + m_2$.

Let x1 and x2 be the vector positions of the two bodies, and m1 and m2 be their masses. The goal is to determine the trajectories x1(t) and x2(t) for all times t, given the initial positions x1(t = 0) and x2(t = 0) and the initial velocities v1(t = 0) and v2(t = 0).

When applied to the two masses, Newton's second law states that

$$\mathbf{F}_{12}(\mathbf{x}_1, \mathbf{x}_2) = m_1 \ddot{\mathbf{x}}_1 \qquad \text{(Equation 1)}$$

$$\mathbf{F}_{21}(\mathbf{x}_1, \mathbf{x}_2) = m_2 \ddot{\mathbf{x}}_2 \qquad \text{(Equation 2)}$$

where F12 is the force on mass 1 due to its interactions with mass 2, and F21 is the force on mass 2 due to its interactions with mass 1. The two dots on top of the x position vectors denote their second derivative with respect to time, or their acceleration vectors.

Adding and subtracting these two equations decouples them into two one-body problems, which can be solved independently. Adding equations (1) and (2) results in an equation describing the center of mass (barycenter) motion. By contrast, subtracting equation (2) from equation (1) results in an equation that describes how the vector r = x1 − x2 between the masses changes with time. The solutions of these independent one-body problems can be combined to obtain the solutions for the trajectories x1(t) and x2(t).

Center of mass motion (1st one-body problem)

Let $\mathbf{R}$ be the position of the center of mass (barycenter) of the system. Addition of the force equations (1) and (2) yields

$$m_1 \ddot{\mathbf{x}}_1 + m_2 \ddot{\mathbf{x}}_2 = (m_1 + m_2) \ddot{\mathbf{R}} = \mathbf{F}_{12} + \mathbf{F}_{21} = 0,$$

where we have used Newton's third law F12 = −F21 and where

$$\ddot{\mathbf{R}} \equiv \frac{m_1 \ddot{\mathbf{x}}_1 + m_2 \ddot{\mathbf{x}}_2}{m_1 + m_2}.$$

The resulting equation

$$\ddot{\mathbf{R}} = 0$$

shows that the velocity $\mathbf{V} = d\mathbf{R}/dt$ of the center of mass is constant, from which it follows that the total momentum m1 v1 + m2 v2 is also constant (conservation of momentum). Hence, the position R(t) of the center of mass can be determined at all times from the initial positions and velocities.

Displacement vector motion (2nd one-body problem)

Dividing both force equations by the respective masses, subtracting the second equation from the first, and rearranging gives the equation

$$\ddot{\mathbf{r}} = \ddot{\mathbf{x}}_1 - \ddot{\mathbf{x}}_2 = \left( \frac{\mathbf{F}_{12}}{m_1} - \frac{\mathbf{F}_{21}}{m_2} \right) = \left( \frac{1}{m_1} + \frac{1}{m_2} \right) \mathbf{F}_{12},$$

where we have again used Newton's third law F12 = −F21 and where r is the displacement vector from mass 2 to mass 1, as defined above.

The force between the two objects, which originates in the two objects, should only be a function of their separation r and not of their absolute positions x1 and x2; otherwise, there would not be translational symmetry, and the laws of physics would have to change from place to place. The subtracted equation can therefore be written:

$$\mu \ddot{\mathbf{r}} = \mathbf{F}(\mathbf{r}),$$

where $\mu$ is the reduced mass

$$\mu = \frac{1}{\frac{1}{m_1} + \frac{1}{m_2}} = \frac{m_1 m_2}{m_1 + m_2}.$$

Solving the equation for r(t) is the key to the two-body problem. The solution depends on the specific force between the bodies, which is defined by F(r). For the case where F(r) follows an inverse-square law, see the Kepler problem.

Once R(t) and r(t) have been determined, the original trajectories may be obtained from

$$\mathbf{x}_1(t) = \mathbf{R}(t) + \frac{m_2}{m_1 + m_2} \mathbf{r}(t),$$

$$\mathbf{x}_2(t) = \mathbf{R}(t) - \frac{m_1}{m_1 + m_2} \mathbf{r}(t),$$

as may be verified by substituting the definitions of R and r into the right-hand sides of these two equations.
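The reduction can be exercised end to end in a few lines. The sketch below (an illustration with assumed units, masses, and initial conditions, none of which come from the article) integrates the single one-body equation μ r̈ = F(r) for an inverse-square attraction, propagates the barycenter at its constant velocity, and recombines R(t) and r(t) into x1(t) and x2(t) using the two formulas above.

```python
import numpy as np

# Illustrative values in units where G = 1 (all numbers are arbitrary).
G, m1, m2 = 1.0, 3.0, 1.0
M = m1 + m2
mu = m1 * m2 / M                       # reduced mass

# Initial relative separation/velocity and barycenter state.
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.2])
R0, V = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # barycenter moves uniformly

def rel_acceleration(r):
    """Acceleration of the relative coordinate: mu*r'' = -G*m1*m2*r/|r|^3, i.e. r'' = -G*M*r/|r|^3."""
    return -G * M * r / np.linalg.norm(r) ** 3

dt, steps = 1e-3, 10000
x1_traj, x2_traj = [], []
a = rel_acceleration(r)
for k in range(steps):                 # leapfrog on the one-body problem only
    v += 0.5 * dt * a
    r += dt * v
    a = rel_acceleration(r)
    v += 0.5 * dt * a
    R = R0 + V * (k + 1) * dt          # trivial one-body problem: uniform motion
    x1_traj.append(R + (m2 / M) * r)   # recombine into the original trajectories
    x2_traj.append(R - (m1 / M) * r)

print("final x1, x2:", x1_traj[-1], x2_traj[-1])
```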

Two-body motion is planar

The motion of two bodies with respect to each other always lies in a plane (in the center of mass frame).

Proof: Defining the linear momentum p and the angular momentum L of the system, with respect to the center of mass, by the equations

$$\mathbf{p} = \mu \frac{d\mathbf{r}}{dt}, \qquad \mathbf{L} = \mathbf{r} \times \mathbf{p} = \mu\, \mathbf{r} \times \frac{d\mathbf{r}}{dt},$$

where μ is the reduced mass and r is the relative position r2 − r1 (with these written taking the center of mass as the origin, and thus both parallel to r), the rate of change of the angular momentum L equals the net torque N

$$\mathbf{N} = \frac{d\mathbf{L}}{dt} = \dot{\mathbf{r}} \times \mu \dot{\mathbf{r}} + \mathbf{r} \times \mu \ddot{\mathbf{r}},$$

and using the property of the vector cross product that v × w = 0 for any vectors v and w pointing in the same direction,

$$\mathbf{N} = \frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F},$$

with F = μ d²r/dt².

Introducing the assumption (true of most physical forces, as they obey Newton's strong third law of motion) that the force between two particles acts along the line between their positions, it follows that r × F = 0 and the angular momentum vector L is constant (conserved). Therefore, the displacement vector r and its velocity v are always in the plane perpendicular to the constant vector L.

Energy of the two-body system

If the force F(r) is conservative then the system has a potential energy U(r), so the total energy can be written as

$$E_\text{tot} = \frac{1}{2} m_1 \dot{\mathbf{x}}_1^2 + \frac{1}{2} m_2 \dot{\mathbf{x}}_2^2 + U(\mathbf{r}) = \frac{1}{2} (m_1 + m_2) \dot{\mathbf{R}}^2 + \frac{1}{2} \mu \dot{\mathbf{r}}^2 + U(\mathbf{r}).$$

In the center of mass frame the kinetic energy is the lowest and the total energy becomes

$$E = \frac{1}{2} \mu \dot{\mathbf{r}}^2 + U(\mathbf{r}).$$

The coordinates x1 and x2 can be expressed as

$$\mathbf{x}_1 = \frac{\mu}{m_1} \mathbf{r}, \qquad \mathbf{x}_2 = -\frac{\mu}{m_2} \mathbf{r},$$

and in a similar way the energy E is related to the energies E1 and E2 that separately contain the kinetic energy of each body:

$$E_1 = \frac{\mu}{m_1} E = \frac{1}{2} m_1 \dot{\mathbf{x}}_1^2 + \frac{\mu}{m_1} U(\mathbf{r}), \qquad E_2 = \frac{\mu}{m_2} E = \frac{1}{2} m_2 \dot{\mathbf{x}}_2^2 + \frac{\mu}{m_2} U(\mathbf{r}), \qquad E_\text{tot} = E_1 + E_2.$$
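The step from the lab-frame expression to the center-of-mass form can be checked directly; this is a standard substitution spelled out here for clarity rather than text from the article. Writing x1 = R + (m2/M) r and x2 = R − (m1/M) r with M = m1 + m2, the cross terms in the kinetic energy cancel and

$$\tfrac{1}{2} m_1 \dot{\mathbf{x}}_1^2 + \tfrac{1}{2} m_2 \dot{\mathbf{x}}_2^2 = \tfrac{1}{2} M \dot{\mathbf{R}}^2 + \tfrac{1}{2}\left( \frac{m_1 m_2^2 + m_2 m_1^2}{M^2} \right) \dot{\mathbf{r}}^2 = \tfrac{1}{2} M \dot{\mathbf{R}}^2 + \tfrac{1}{2} \mu \dot{\mathbf{r}}^2,$$

so setting Ṙ = 0 in the center of mass frame leaves exactly the energy E quoted above.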

Central forces

For many physical problems, the force F(r) is a central force, i.e., it is of the form

$$\mathbf{F}(\mathbf{r}) = F(r)\, \hat{\mathbf{r}},$$

where r = |r| and $\hat{\mathbf{r}}$ = r/r is the corresponding unit vector. We now have

$$\mu \ddot{\mathbf{r}} = F(r)\, \hat{\mathbf{r}},$$

where F(r) is negative in the case of an attractive force.
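As a brief illustration of how the central-force form is usually exploited (a standard reduction summarized here, not part of the article text): since L = μ r × ṙ is conserved, the motion stays in a plane and can be written in polar coordinates (r, θ), where the equation of motion splits into

$$\mu\left( \ddot{r} - r \dot{\theta}^2 \right) = F(r), \qquad \mu r^2 \dot{\theta} = L = \text{constant}.$$

Eliminating θ̇ = L/(μ r²) turns the problem into a single ordinary differential equation for r(t), which is the starting point of the classical central-force and Kepler problem treatments.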

Prehistory of nakedness and clothing

Among the many characteristics of humans, nakedness and clothing are closely linked. The loss of body hair distinguishes humans from other primates. The current evidence indicates that anatomically modern humans were naked in prehistory for at least 90,000 years before the invention of clothing. Today, isolated Indigenous peoples in tropical climates continue to go without clothing in many everyday activities.

Evolution of hairlessness

Humans' closest living relatives have both extensive areas of fur and also bare patches

The general hairlessness of humans in comparison to related species may be due to loss of functionality in the pseudogene KRTHAP1 (which helps produce keratin) in the human lineage about 240,000 years ago. On an individual basis, mutations in the gene HR can lead to complete hair loss, though this is not typical in humans. Humans may also lose their hair as a result of hormonal imbalance due to drugs or pregnancy.

In order to comprehend why humans have significantly less body hair than other primates, one must understand that mammalian body hair is not merely an aesthetic characteristic; it protects the skin from wounds, bites, heat, cold, and UV radiation. Additionally, it can be used as a communication tool and as camouflage.

The first member of the genus Homo to be hairless was Homo erectus, originating about 1.6 million years ago. The dissipation of body heat remains the most widely accepted evolutionary explanation for the loss of body hair in early members of the genus Homo, the surviving member of which is modern humans. Less hair, and an increase in sweat glands, made it easier for their bodies to cool when they moved from living in shady forest to open savanna. This change in environment also resulted in a change in diet, from largely vegetarian to hunting. Pursuing game on the savanna also increased the need for regulation of body heat.

Anthropologist and paleo-biologist Nina Jablonski posits that the ability to dissipate excess body heat through eccrine sweating helped make possible the dramatic enlargement of the brain, the most temperature-sensitive human organ. Thus the loss of fur was also a factor in further adaptations, both physical and behavioral, that differentiated humans from other primates. Some of these changes are thought to be the result of sexual selection. By selecting more hairless mates, humans accelerated changes initiated by natural selection. Sexual selection may also account for the remaining human hair in the pubic area and armpits, which are sites for pheromones, while hair on the head continued to provide protection from the sun. Anatomically modern humans, whose traits include hairlessness, evolved 260,000 to 350,000 years ago.

Phenotypic changes

Humans are the only primate species that have undergone significant hair loss and of the approximately 5000 extant species of mammal, only a handful are effectively hairless. This list includes elephants, rhinoceroses, hippopotamuses, walruses, some species of pigs, whales and other cetaceans, and naked mole rats. Most mammals have light skin that is covered by fur, and biologists believe that early human ancestors started out this way also. Dark skin probably evolved after humans lost their body fur, because the naked skin was vulnerable to the strong UV radiation as explained in the Out of Africa hypothesis. Therefore, evidence of the time when human skin darkened has been used to date the loss of human body hair, assuming that the dark skin was needed after the fur was gone.

With the loss of fur, darker, high-melanin skin evolved as a protection from ultraviolet radiation damage. As humans migrated outside of the tropics, varying degrees of depigmentation evolved in order to permit UVB-induced synthesis of previtamin D3. The relative lightness of female compared to male skin in a given population may be due to the greater need for women to produce more vitamin D during lactation.

The sweat glands in humans could have evolved to spread from the hands and feet as the body hair changed, or the hair change could have occurred to facilitate sweating. Horses and humans are two of the few animals capable of sweating on most of their body, yet horses are larger and still have fully developed fur. In humans, the skin hairs lie flat in hot conditions, as the arrector pili muscles relax, preventing heat from being trapped by a layer of still air between the hairs, and increasing heat loss by convection.

Sexual selection hypothesis

Another hypothesis for humans' reduced body hair proposes that Fisherian runaway sexual selection played a role (as it may have in the selection of long head hair; see terminal and vellus hair), along with a much larger role of testosterone in men. Sexual selection is the only theory thus far that explains the sexual dimorphism seen in the hair patterns of men and women. On average, men have more body hair than women. Males have more terminal hair, especially on the face, chest, abdomen, and back, and females have more vellus hair, which is less visible. The halting of hair development at a juvenile stage, vellus hair, would also be consistent with the neoteny evident in humans, especially in females, and thus they could have occurred at the same time. This theory, however, rests heavily on today's cultural norms. There is no evidence that sexual selection would proceed to such a drastic extent over a million years ago, when a full, lush coat of hair would most likely indicate health and would therefore be more likely to be selected for, not against.

Water-dwelling hypothesis

The aquatic ape hypothesis (AAH) includes hair loss as one of several characteristics of modern humans that could indicate adaptations to an aquatic environment. Contemporary anthropologists give serious consideration to some hypotheses related to the AAH, but hair loss is not one of them.

Parasite hypothesis

A divergent explanation of humans' relative hairlessness holds that ectoparasites (such as ticks) residing in fur became problematic as humans became hunters living in larger groups with a "home base". Nakedness would also make the lack of parasites apparent to prospective mates. However, this theory is inconsistent with the abundance of parasites that continue to exist in the remaining patches of human hair.

The "ectoparasite" explanation of modern human nakedness is based on the principle that a hairless primate would harbor fewer parasites. When our ancestors adopted group-dwelling social arrangements roughly 1.8 mya, ectoparasite loads increased dramatically. Early humans became the only one of the 193 primate species to have fleas, which can be attributed to the close living arrangements of large groups of individuals. While primate species have communal sleeping arrangements, these groups are always on the move and thus are less likely to harbor ectoparasites.

It was expected that dating the split of the ancestral human louse into two species, the head louse and the pubic louse, would date the loss of body hair in human ancestors. However, it turned out that the human pubic louse does not descend from the ancestral human louse, but from the gorilla louse, diverging 3.3 million years ago. This suggests that humans had lost body hair (but retained head hair) and developed thick pubic hair prior to this date, were living in or close to the forest where gorillas lived, and acquired pubic lice from butchering gorillas or sleeping in their nests. The evolution of the body louse from the head louse, on the other hand, places the date of clothing much later, some 100,000 years ago.

Fire hypothesis

Another hypothesis is that humans' use of fire caused or initiated the reduction in human hair.

Childrearing hypothesis

Another view is proposed by James Giles, who attempts to explain hairlessness as evolved from the relationship between mother and child, and as a consequence of bipedalism. Giles also connects romantic love to hairlessness.

The last common ancestor of humans and chimpanzees was only partially bipedal, often using its front limbs for locomotion. Other primate mothers do not need to carry their young because there is fur for them to cling to, but the loss of fur encouraged full bipedalism, allowing mothers to carry their babies with one or both hands. The combination of hairlessness and upright posture may also explain the enlargement of the female breasts as a sexual signal. Another theory is that the loss of fur also promoted mother-child attachment based upon the pleasure of skin-to-skin contact. This may explain the more extensive hairlessness of female humans compared to males. Nakedness also affects sexual relationships, the duration of human intercourse being many times that of any other primate.

Origin of clothing

A necklace reconstructed from perforated sea snail shells from Upper Palaeolithic Europe, dated between 39,000 and 25,000 BCE. The practice of body adornment is associated with the emergence of behavioral modernity.

The current empirical evidence for the origin of clothing is from a 2010 study published in Molecular Biology and Evolution. That study indicates that the habitual wearing of clothing began at some point between 83,000 and 170,000 years ago, based upon a genetic analysis indicating when clothing lice diverged from their head louse ancestors. This suggests that the use of clothing likely originated with anatomically modern humans in Africa prior to their migration to colder climates, and was what allowed that migration.

Some of the technology for what is now called clothing may have originated to make other types of adornment, including jewelry, body paint, tattoos, and other body modifications, "dressing" the naked body without concealing it. According to Mark Leary and Nicole R. Buttermore, body adornment is one of the changes that occurred in the late Paleolithic (40,000 to 60,000 years ago) in which humans became not only anatomically modern, but also behaviorally modern and capable of self-reflection and symbolic interaction. More recent studies place the use of adornment at 77,000 years ago in South Africa, and 90,000–100,000 years ago in Israel and Algeria. While modesty may be a factor, often overlooked purposes for body coverings are camouflage used by hunters, body armor, and costumes used to impersonate "spirit-beings".

The origin of complex, fitted clothing required the invention of fine stone knives for cutting skins into pieces, and the eyed needle for sewing. This was done by Cro-Magnons, who migrated to Europe around 35,000 years ago. Neanderthals occupied the same region but became extinct in part because they could not make fitted garments; limited to simple stone tools, they draped themselves with crudely cut skins, which did not provide the warmth needed to survive as the climate grew colder in the Last Glacial Period. In addition to being less functional, such simple wrappings would not have been worn habitually by Neanderthals, who were more cold-tolerant than Homo sapiens, and so would not have acquired the secondary functions of decoration and promoting modesty.

The earliest archeological evidence of fabric clothing is inferred from representations in figurines in the southern Levant dated between 11,700 and 10,500 years ago. The oldest surviving examples of woven cloth are linen from Egypt dated to 5000 BCE, although knotted or twisted flax fibers have been found as early as 7000 BCE.

While adults are rarely completely naked in modern societies, covering at least their genitals, adornments and clothing often emphasize, enhance, or otherwise call attention to the sexuality of the body.

Surface modification of biomaterials with proteins


Protein patterning – chessboard pattern

Biomaterials are materials that are used in contact with biological systems. Surface modification of the metallic, polymeric, and ceramic biomaterials in current use allows their surface properties to be altered to enhance performance and biocompatibility in a biological environment while retaining the bulk properties of the desired device.

Surface modification involves the fundamentals of physicochemical interactions between the biomaterial and the physiological environment at the molecular, cellular, and tissue levels (for example, to reduce bacterial adhesion and promote cell adhesion). Currently, there are various methods for the characterization and surface modification of biomaterials, and useful applications of these fundamental concepts in several biomedical solutions.

Function

The function of surface modification is to change the physical and chemical properties of surfaces to improve the functionality of the original material. Protein surface modification of various types of biomaterials (ceramics, polymers, metals, composites) is performed to ultimately increase the biocompatibility of the material so that it interacts as a bioactive material for specific applications. In various biomedical applications of developing implantable medical devices (such as pacemakers and stents), the surface properties and interactions of proteins with a specific material must be evaluated with regard to biocompatibility, as they play a major role in determining the biological response. For instance, the surface hydrophobicity or hydrophilicity of a material can be altered. Engineering biocompatibility between the physiological environment and the surface material allows new medical products, materials, and surgical procedures with additional biofunctionality.

Surface modification can be done through various methods, which can be classified into three main groups: physical (physical adsorption, Langmuir–Blodgett film), chemical (oxidation by strong acids, ozone treatment, chemisorption, and flame treatment), and radiation (glow discharge, corona discharge, photo activation (UV), laser, ion beam, plasma immersion ion implantation, electron beam lithography, and γ-irradiation).

Biocompatibility

From a biomedical perspective, biocompatibility is the ability of a material to perform with an appropriate host response in a specific application. A biocompatible material is non-toxic, does not induce adverse reactions such as a chronic inflammatory response with unusual tissue formation, and is designed to function properly for a reasonable lifetime. It is a requirement of biomaterials that the surface-modified material will cause no harm to the host, and that the material itself will not be harmed by the host. Although most synthetic biomaterials have physical properties that meet or even exceed those of natural tissue, they often result in an unfavorable physiological reaction such as thrombosis, inflammation, and infection.

Biointegration is the ultimate goal in, for example, orthopedic implants, in which bone establishes a mechanically solid interface with complete fusion between the artificial implanted material and bone tissue under good biocompatibility conditions. Modifying the surface of a material can improve its biocompatibility, and can be done without changing its bulk properties. The properties of the uppermost molecular layers are critical in biomaterials, since these surface layers are in physicochemical contact with the biological environment.

Furthermore, although some biomaterials have good biocompatibility, they may possess poor mechanical or physical properties such as wear resistance, corrosion resistance, wettability, or lubricity. In these cases, surface modification is used to deposit a coating layer, or to mix a coating with the substrate to form a composite layer.

Cell adhesion

Because proteins are made up of different sequences of amino acids, they can have various functions, as their structural shape, driven by a number of molecular bonds, can change. Amino acids exhibit different characteristics, such as being polar, non-polar, or positively or negatively charged, determined by their different side chains. Attaching molecules bearing particular protein sequences, for example those containing Arginine-Glycine-Aspartate (RGD), is therefore expected to modify the surface of tissue scaffolds and improve cell adhesion when the scaffold is placed into its physiological environment. Additional modification of the surface can be achieved by attaching functional groups in 2D or 3D patterns on the surface so that cell alignment is guided and new tissue formation is improved.

Biomedical materials

Some of the surface modification techniques listed above are particularly used for certain functions or kinds of materials. One of the advantages of plasma immersion ion implantation is its ability to treat most materials. Ion implantation is an effective surface treatment technique that can be used to enhance the surface properties of biomaterials. The unique advantage of plasma modification is that the surface properties and biocompatibility can be enhanced selectively while the favorable bulk attributes of the materials, such as strength, remain unchanged. Overall, it is an effective method for modifying medical implants with complex shapes. By altering the surface functionalities using plasma modification, optimal surface, chemical, and physical properties can be obtained.

Plasma immersion ion implantation is a technique suitable for low-melting-point materials such as polymers, and is widely accepted as improving adhesion between pinhole-free layers and substrates. The ultimate goal is to enhance the properties of biomaterials, such as biocompatibility, corrosion resistance, and functionality, by fabricating different types of biomedical thin films implanted with biologically important elements such as nitrogen, calcium, and sodium. Thin films such as titanium oxide, titanium nitride, and diamond-like carbon have been treated in this way, and results show that the processed materials exhibit better biocompatibility than some of those currently used in biomedical implants. To evaluate the biocompatibility of the fabricated thin films, tests in various in vitro biological environments need to be conducted.

Biological response

The immune system will react differently if an implant is coated in extracellular matrix proteins. The proteins surrounding the implant serve to "hide" the implant from the innate immune system. However, if the implant is coated in allergenic proteins, the patient's adaptive immune response may be initiated. To prevent such a negative immune reaction, immunosuppressive drugs may be prescribed, or the protein coating may be produced from autologous tissue.

Acute response

Immediately following insertion, an implant (and the tissue damage from surgery) will result in acute inflammation. The classic signs of acute inflammation are redness, swelling, heat, pain, and loss of function. Hemorrhaging from tissue damage results in clotting, which stimulates latent mast cells. The mast cells release chemokines, which activate the blood vessel endothelium. The blood vessels dilate and become leaky, producing the redness and swelling associated with acute inflammation. The activated endothelium allows extravasation of blood plasma and white blood cells, including macrophages, which transmigrate to the implant and recognize it as non-biologic. Macrophages release oxidants to combat the foreign body. If these oxidants fail to destroy the foreign body, chronic inflammation begins.

Chronic response

Implantation of non-degradable materials will eventually result in chronic inflammation and fibrous capsule formation. Macrophages that fail to destroy pathogens will merge to form a foreign-body giant cell which quarantines the implant. High levels of oxidants cause fibroblasts to secrete collagen, forming a layer of fibrous tissue around the implant.

By coating an implant with extracellular matrix proteins, macrophages will be unable to recognize the implant as non-biologic. The implant is then capable of continued interaction with the host, influencing the surrounding tissue toward various outcomes. For instance, the implant may improve healing by secreting angiogenic drugs.

Fabrication techniques

Physical modification

Physical immobilization is simply coating a material with a biomimetic material without changing the structure of either. Various biomimetic materials with cell-adhesive proteins (such as collagen or laminin) have been used in vitro to direct new tissue formation and cell growth. Cell adhesion and proliferation occur much better on protein-coated surfaces. However, since the proteins are generally isolated, the coating is more likely to elicit an immune response. In general, the surface chemistry should also be taken into consideration.

Chemical modification

Covalent binding of protein with polymer graft

Alkali hydrolysis, covalent immobilization, and the wet chemical method are only three of the many ways to chemically modify a surface. The surface is prepped with surface activation, where several functionalities are placed on the polymer to react better with the proteins. In alkali hydrolysis, small protons diffuse between polymer chains and cause surface hydrolysis which cleaves ester bonds. This results in the formation of carboxyl and hydroxyl functionalities which can attach to proteins. In covalent immobilization, small fragments of proteins or short peptides are bonded to the surface. The peptides are highly stable and studies have shown that this method improves biocompatibility. The wet chemical method is one of the preferred methods of protein immobilization. Chemical species are dissolved in an organic solution where reactions take place to reduce the hydrophobic nature of the polymer. Surface stability is higher in chemical modification than in physical adsorption. It also offers higher biocompatibility towards cell growth and bodily fluid flow.

Photochemical modification

Cell adhesion for various functional groups. OH and CONH2 improve surface wetting compared with COOH

Successful attempts at grafting biomolecules onto polymers have been made using photochemical modification of biomaterials. These techniques employ high energy photons (typically UV) to break chemical bonds and release free radicals. Protein adhesion can be encouraged by favorably altering the surface charge of a biomaterial. Improved protein adhesion leads to better integration between the host and the implant. Ma et al. compared cell adhesion for various surface groups and found that OH and CONH2 improved PLLA wettability more than COOH.

Applying a mask to the surface of the biomaterial allows selective surface modification. Areas that UV light penetrates will be modified such that cells adhere to those regions more favorably.

The minimum feature size attainable is given by

$$CD = k_1 \frac{\lambda}{NA},$$

where

CD is the minimum feature size,

k1 (commonly called the k1 factor) is a coefficient that encapsulates process-related factors and typically equals 0.4 for production,

λ is the wavelength of light used, and

NA is the numerical aperture of the lens as seen from the wafer.

According to this equation, greater resolution can be obtained by decreasing the wavelength, and increasing the numerical aperture.
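For a rough sense of scale (illustrative numbers only, not values given in the article): with k1 = 0.4, an ultraviolet source at λ = 365 nm, and NA = 0.5, the attainable feature size would be

$$CD = 0.4 \times \frac{365\ \text{nm}}{0.5} \approx 292\ \text{nm},$$

so moving to a shorter wavelength or a higher-numerical-aperture lens shrinks the smallest pattern that can be projected onto the biomaterial surface.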

Composites and graft formation

Graft formation improves the overall hydrophilicity of the material, depending on the ratio of glycolic acid to lactic acid added. The block copolymer PLGA decreases the hydrophobicity of the surface through the amount of glycolic acid it contains, although this does not by itself make the material hydrophilic. In brush grafting, hydrophilic polymers containing alcohol or hydroxyl groups are attached to surfaces through photopolymerization.

Plasma treatment

Plasma techniques are especially useful because they can deposit ultrathin (a few nm), adherent, conformal coatings. Glow discharge plasma is created by filling a vacuum with a low-pressure gas (e.g., argon, ammonia, or oxygen). The gas is then excited using microwaves or current, which ionizes it. The ionized gas is then thrown onto a surface at high velocity, where the energy produced physically and chemically changes the surface. After the changes occur, the ionized plasma gas is able to react with the surface to make it ready for protein adhesion. However, the surfaces may lose mechanical strength or other inherent properties because of the high amounts of energy.

Several plasma-based technologies have been developed to covalently immobilize proteins, depending on the final application of the resulting biomaterial. This technique is a relatively fast approach to producing smart bioactive surfaces.

Applications

Bone tissue

Extracellular matrix (ECM) proteins greatly dictate the process of bone formation: the attachment and proliferation of osteoprogenitor cells, differentiation to osteoblasts, matrix formation, and mineralization. It is beneficial to design biomaterials for bone-contacting devices with bone matrix proteins to promote bone growth. It is also possible to covalently and directionally immobilize osteoinductive peptides on the surface of ceramic materials such as hydroxyapatite/β-tricalcium phosphate to stimulate osteoblast differentiation and better bone regeneration. RGD peptides have been shown to increase the attachment and migration of osteoblasts on titanium implants, polymeric materials, and glass. Other adhesive peptides that can be recognized by molecules in the cell membrane can also affect the binding of bone-derived cells. In particular, the heparin-binding domain in fibronectin is actively involved in specific interaction with osteogenic cells. Modification with heparin-binding domains has the potential to enhance the binding of osteoblasts without affecting the attachment of endothelial cells and fibroblasts. Additionally, growth factors such as those in the bone morphogenetic protein family are important polypeptides for inducing bone formation. These growth factors can be covalently bound to materials to enhance the osteointegration of implants.

Neural tissue

Peripheral nervous system damage is typically treated by an autograft of nerve tissue to bridge a severed gap. This treatment requires successful regeneration of neural tissue; axons must grow from the proximal stump without interference in order to make a connection with the distal stump. Neural guidance channels (NGCs) have been designed as conduits for the growth of new axons, and the differentiation and morphogenesis of these tissues are affected by the interaction between neural cells and the surrounding ECM. Studies of laminin have shown the protein to be an important ECM protein in the attachment of neural cells. The pentapeptides YIGSR and IKVAV, which are important sequences in laminin, have been shown to increase the attachment of neural cells, with the ability to control the spatial organization of the cells.

Cardiovascular tissue

It is important that cardiovascular devices such as stents or artificial vascular grafts be designed to mimic the properties of the specific tissue region the device is serving to replace. In order to reduce thrombogenicity, surfaces can be coated with fibronectin and RGD-containing peptides, which encourage the attachment of endothelial cells. The peptides YIGSR and REDV have also been shown to enhance the attachment and spreading of endothelial cells and ultimately reduce the thrombogenicity of the implant.

Surface protein sequence – Function
RGD – Promotes cell adhesion
Osteopontin-1 – Improves mineralization by osteoblasts
Laminin – Promotes neurite outgrowth
GVPGI – Improves mechanical stability of vascular grafts
REDV – Enhances endothelial cell adhesion
YIGSR – Promotes neural and endothelial cell attachment
PHPMA-RGD – Promotes axonal outgrowth
IKVAV – Promotes neural cell attachment
KQAGDVA – Promotes smooth muscle cell adhesion
VIPGIG – Enhances elastic modulus of artificial ECM
FKRRIKA – Improves mineralization by osteoblasts
KRSR – Promotes osteoblast adhesion
MEPE – Promotes osteoblast differentiation

Big Crunch

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Big_Crunch
An animation of the expected behavior of a Big Crunch

The Big Crunch is a hypothetical scenario for the ultimate fate of the universe, in which the expansion of the universe eventually reverses and the universe recollapses, ultimately causing the cosmic scale factor to reach zero, an event potentially followed by a reformation of the universe starting with another Big Bang. The vast majority of evidence indicates that this hypothesis is not correct. Instead, astronomical observations show that the expansion of the universe is accelerating rather than being slowed by gravity, suggesting that the universe is far more likely to end in heat death.

The theory dates back to 1922, when Russian physicist Alexander Friedmann created a set of equations showing that the fate of the universe depends on its density: it could either expand or contract rather than remain static. With enough matter, gravity could stop the universe's expansion and eventually reverse it. This reversal would result in the universe collapsing in on itself, not unlike the collapse that forms a black hole.

The outcome of the universe depends on which of two influences wins out: the explosive impetus from the Big Bang, or gravity. If gravity overcomes the expansion driven by the Big Bang, the Big Crunch will begin, reversing the Big Bang. If it does not, heat death is the most likely scenario. While astronomers know that the universe is expanding, there is no consensus on how large the force driving the expansion actually is.

Toward the end of a Big Crunch, the universe would be filled with radiation from stars and high-energy particles; as this radiation is compressed and blueshifted to higher energies, it would become intense enough to ignite the surfaces of stars before they could collide. In the final moments, the universe would be one large fireball of effectively infinite temperature, and at the absolute end neither time nor space would remain.

Overview

The Big Crunch scenario hypothesizes that the density of matter throughout the universe is sufficiently high that gravitational attraction will overcome the expansion which began with the Big Bang. The FLRW cosmology can predict whether the expansion will eventually stop based on the average energy density, the Hubble parameter, and the cosmological constant. If the metric expansion stopped, contraction would inevitably follow, accelerating as time passes and finishing the universe in a kind of gravitational collapse, turning the universe into a black hole.
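A minimal sketch of that bookkeeping, assuming the classic matter-only case with zero cosmological constant, in which a density parameter Ω greater than 1 implies recollapse; the Hubble constant and average density used below are placeholders, not measured values from the article:

```python
import math

# Physical constants (SI).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22          # metres per megaparsec

def critical_density(H0_km_s_Mpc):
    """Critical density rho_c = 3 H^2 / (8 pi G) for a given Hubble constant."""
    H0 = H0_km_s_Mpc * 1000.0 / MPC          # convert km/s/Mpc to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

def recollapses(average_density, H0_km_s_Mpc):
    """Matter-only, Lambda = 0 criterion: Omega = rho/rho_c > 1 means a Big Crunch."""
    omega = average_density / critical_density(H0_km_s_Mpc)
    return omega > 1.0, omega

# Placeholder inputs: H0 = 70 km/s/Mpc and a hypothetical average density in kg/m^3.
crunch, omega = recollapses(average_density=1.0e-26, H0_km_s_Mpc=70.0)
print(f"Omega = {omega:.2f} -> {'recollapse (Big Crunch)' if crunch else 'expands forever'}")
```

For these placeholder inputs the script reports Ω slightly above 1, a recollapsing toy universe; with a realistic cosmological constant included, the criterion is more involved, and observations favor eternal, accelerating expansion, as the next paragraph notes.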

Experimental evidence in the late 1990s and early 2000s (namely the observation of distant supernovae as standard candles and the well-resolved mapping of the cosmic microwave background) led to the conclusion that the expansion of the universe is not being slowed by gravity but is instead accelerating. The 2011 Nobel Prize in Physics was awarded to researchers who contributed to this discovery.

The Big Crunch theory also leads into another theory known as the Big Bounce, in which, after the Big Crunch destroys the universe, it undergoes a sort of bounce, causing another Big Bang. This could potentially repeat forever in a phenomenon known as a cyclic universe.

History

Richard Bentley, a churchman and scholar, in preparation for a lecture on Newton's theories and the rejection of atheism, sent a letter to Sir Isaac Newton asking:

"If we're in a finite universe and all stars attract each other together, would they not all collapse to a singular point, and if we're in an infinite universe with infinite stars, would infinite forces in every direction not affect all of those stars?"

This question is known as Bentley's paradox, a proto-theory of the Big Crunch. It is now known, however, that stars move around and are not static.

Newton's copy of Principia, the book that caused Richard Bentley to send Newton the letter.

Einstein's cosmological constant

Albert Einstein favored a completely unchanging model of the universe. He collaborated in 1917 with Dutch astronomer Willem de Sitter to help demonstrate that the theory of general relativity would work with a static model; de Sitter showed that the equations could describe a very simple universe. Finding no problems initially, scientists adapted the model to describe the universe. However, they ran into a different form of Bentley's paradox.

The theory of general relativity, however, described a restless universe, contradicting the information available to Einstein. He realized that for a static universe to exist, as observations at the time seemed to show, some form of anti-gravity would be needed to counter the gravity pulling the universe together, even though adding such an extra force threatened to spoil the equations of the theory of relativity. In the end, the cosmological constant, the name given to this anti-gravity term, was added to the theory of relativity.

Discovery of Hubble's law

Scatter plot that Hubble used to find the Hubble constant.

Edwin Hubble, working at the Mount Wilson Observatory, took measurements of the distances of galaxies and paired them with Vesto Slipher's and Milton Humason's measurements of the redshifts associated with those galaxies. He discovered a rough proportionality between the redshift of an object and its distance. Hubble fitted a trend line to 46 galaxies and obtained a value for the Hubble constant, which he deduced to be 500 km/s/Mpc, nearly seven times the value accepted today, but which still gave proof that the universe was expanding and was not static.
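Hubble's procedure amounts to fitting a straight line, v = H0 d, through velocity-versus-distance points. The sketch below repeats that idea with made-up (distance, velocity) pairs chosen only for illustration; they are not Hubble's actual 46-galaxy data set.

```python
import numpy as np

# Hypothetical (distance in Mpc, recession velocity in km/s) pairs for illustration;
# these are NOT Hubble's 1929 measurements.
distance_mpc = np.array([0.5, 0.9, 1.4, 2.0, 2.0, 3.5, 4.0])
velocity_kms = np.array([260, 400, 650, 980, 1100, 1700, 2100])

# Least-squares slope through the origin, v = H0 * d, mirroring Hubble's law.
H0 = np.sum(distance_mpc * velocity_kms) / np.sum(distance_mpc ** 2)
print(f"Estimated Hubble constant: {H0:.0f} km/s/Mpc")
```

With these invented points the fitted slope comes out around 500 km/s/Mpc, of the same order as Hubble's original estimate.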

Abandonment of the cosmological constant

After Hubble published his discovery, Einstein completely abandoned the cosmological constant. In their simplest form, the equations of general relativity generated a model of the universe that expanded or contracted, contradicting what was then believed to be observed; hence the creation of the cosmological constant. After the confirmation that the universe was expanding, Einstein called his assumption that the universe was static his "biggest mistake." In 1931, Einstein visited Hubble to thank him for "providing the basis of modern cosmology."

After this discovery, Einstein's and Newton's models of a static universe were dropped in favor of the expanding-universe model.

Cyclic universes

A theory called "Big Bounce" proposes that the universe could collapse to the state where it began and then initiate another Big Bang, so in this way, the universe would last forever but would pass through phases of expansion (Big Bang) and contraction (Big Crunch). This means that there may be a universe in a state of constant Big Bangs and Big Crunches.

Cyclic universes were briefly considered by Albert Einstein in 1931. He theorized that there was a universe before the Big Bang, which ended in a Big Crunch, which could create a Big Bang as a reaction. Our universe could be in a cycle of expansion and contraction, a cycle possibly going on infinitely.

Ekpyrotic model

Illustration of two branes, the basis of the ekpyrotic model.

There are more modern theories of cyclic universes as well. The ekpyrotic theory, formulated by Paul Steinhardt, states that the Big Bang could have been caused by two parallel orbifold planes, referred to as branes, colliding in a higher-dimensional space. The four-dimensional universe lies on one of these branes. The collision corresponds to a Big Crunch followed by a Big Bang. The matter and radiation around us today are the result of quantum fluctuations from before the collision of the branes. After several billion years the universe has reached its modern state, and it will start contracting in another several billion years. Dark energy corresponds to the force between the branes, and it allows problems of the previous theories, such as the flatness and monopole problems, to be fixed. The cycles can also go infinitely into the past and the future, and an attractor allows for a complete history of the universe.

This fixes the problem of earlier cyclic models, in which the universe would go into heat death from entropy buildup; the new model avoids this with a net expansion after every cycle, stopping entropy from accumulating. However, there are still some flaws in this model. Branes, the basis of the theory, are still not completely understood by string theorists, and there is the possibility that the scale-invariant spectrum could be destroyed by the Big Crunch. And while the general character of the forces required to produce the vacuum fluctuations, whether from cosmic inflation or from the collision of the branes in the ekpyrotic model, is known, a candidate from particle physics is missing.

Conformal Cyclic Cosmology (CCC) model

A map of the CMB showing different hot spots around the universe.

Physicist Roger Penrose advanced a general relativity-based theory called conformal cyclic cosmology, in which the universe expands until all the matter decays and is turned to light. Since nothing in the universe would then have any time or distance scale associated with it, it becomes identical to the Big Bang (resulting in a type of Big Crunch which becomes the next Big Bang, thus starting the next cycle). Penrose and Gurzadyan suggested that signatures of conformal cyclic cosmology could potentially be found in the cosmic microwave background; as of 2020, these have not been detected.

This theory also has some flaws: skeptics have pointed out that in order to match up an infinitely large universe with an infinitely small one, all particles must lose their mass as the universe gets old. However, Penrose presented evidence for CCC in the form of rings of uniform temperature in the CMB, the idea being that these rings would be a signature, in our aeon (an aeon being the current cycle of the universe we are in), of spherical gravitational waves caused by colliding black holes from the previous aeon.

Loop quantum cosmology (LQC)

Loop quantum cosmology is a model of the universe that proposes a "quantum bridge" between expanding and contracting universes. In this model, quantum geometry creates a brand-new force that is negligible at low space-time curvature but rises very rapidly in the Planck regime, overwhelming classical gravity and resolving the singularities of general relativity. Once the singularities are resolved, the conceptual paradigm of cosmology changes, forcing one to revisit the standard issues, such as the horizon problem, from a new perspective.

Due to quantum geometry, the Big Bang is replaced by the Big Bounce with no assumptions or fine tuning. An important feature of the theory is the space-time description of the underlying quantum evolution. The approach of effective dynamics has been used extensively in loop quantum cosmology to describe physics at the Planck scale and at the beginning of the universe. Numerical simulations have confirmed the validity of effective dynamics, which provides a good approximation of the full loop quantum dynamics. It has been shown that when states have very large quantum fluctuations at late times, meaning they do not lead to macroscopic universes as described by general relativity, the effective dynamics departs from the quantum dynamics near the bounce and in the later universe. In this case, the effective dynamics will overestimate the density at the bounce, but it will still capture qualitative aspects extremely well.

Empirical scenarios from physical theories

If a form of quintessence, driven by a scalar field evolving down a monotonically decreasing potential that passes sufficiently below zero, is the (main) explanation of dark energy, and if current data (in particular observational constraints on dark energy) hold up as well, the accelerating expansion of the Universe would reverse to contraction within the cosmic near-future of the next 100 million years. According to an Andrei–Ijjas–Steinhardt study, the scenario fits "naturally with cyclic cosmologies and recent conjectures about quantum gravity". The study suggests that the slow contraction phase would "endure for a period of order 1 billion y before the universe transitions to a new phase of expansion".

Effects

Paul Davies considered a scenario in which the Big Crunch happens about 100 billion years from the present. In his model, the contracting universe would evolve roughly like the expanding phase in reverse. First, galaxy clusters, and then galaxies, would merge, and the temperature of the cosmic microwave background (CMB) would begin to rise as CMB photons get blueshifted. Stars would eventually become so close together that they begin to collide with each other. Once the CMB becomes hotter than M-type stars (about 500,000 years before the Big Crunch in Davies' model), they would no longer be able to radiate away their heat and would cook themselves until they evaporate; this continues for successively hotter stars until O-type stars boil away about 100,000 years before the Big Crunch. In the last minutes, the temperature of the universe would be so great that atoms and atomic nuclei would break up and get sucked up into already coalescing black holes. At the time of the Big Crunch, all the matter in the universe would be crushed into an infinitely hot, infinitely dense singularity similar to the Big Bang. The Big Crunch may be followed by another Big Bang, creating a new universe.
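A rough order-of-magnitude sketch of where such numbers come from (assumed temperatures, not Davies' actual calculation): the CMB temperature scales inversely with the cosmic scale factor, so during contraction T(a) ≈ T0/a with T0 ≈ 2.7 K today. The snippet estimates how much the universe must shrink before the CMB outheats first a cool M-type star and then a hot O-type star.

```python
# Rough scaling argument: T_cmb is inversely proportional to the scale factor a,
# so during contraction T(a) = T0 / a (T0 ~ 2.7 K today).
T0 = 2.7

# Assumed representative surface temperatures (illustrative, not from the article).
stars = {"M-type star": 3000.0, "O-type star": 40000.0}

for name, T_surface in stars.items():
    a = T0 / T_surface                     # scale factor at which T_cmb matches the star
    print(f"{name}: CMB reaches {T_surface:.0f} K when the universe is "
          f"~{1/a:.0f}x smaller than today (a = {a:.1e})")
```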

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)
In info...