Sunday, April 16, 2017

Geological history of oxygen

From Wikipedia, the free encyclopedia
 
O2 build-up in the Earth's atmosphere. Red and green lines represent the range of the estimates while time is measured in billions of years ago (Ga).
Stage 1 (3.85–2.45 Ga): Practically no O2 in the atmosphere.
Stage 2 (2.45–1.85 Ga): O2 produced, but absorbed in oceans and seabed rock.
Stage 3 (1.85–0.85 Ga): O2 starts to escape from the oceans, but is absorbed by land surfaces and by the newly forming ozone layer.
Stages 4 and 5 (0.85 Ga–present): O2 sinks filled, the gas accumulates.[1]

Before photosynthesis evolved, Earth's atmosphere had no free oxygen (O2).[2] Photosynthetic prokaryotic organisms that produced O2 as a waste product lived long before the first build-up of free oxygen in the atmosphere,[3] perhaps as early as 3.5 billion years ago. The oxygen they produced would have been rapidly removed from the atmosphere by weathering of reducing minerals, most notably iron. This "mass rusting" led to the deposition of iron oxide on the ocean floor, forming banded iron formations. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event.[4] This mass oxygenation of the atmosphere resulted in rapid buildup of free oxygen. At current rates of primary production, today's concentration of oxygen could be produced by photosynthetic organisms in 2,000 years.[5] In the absence of plants, the rate of oxygen production by photosynthesis was slower in the Precambrian, and the concentrations of O2 attained were less than 10% of today's and probably fluctuated greatly; oxygen may even have disappeared from the atmosphere again around 1.9 billion years ago.[6] These fluctuations in oxygen concentration had little direct effect on life,[citation needed] with mass extinctions not observed until the appearance of complex life around the start of the Cambrian period, 541 million years ago.[7]

The presence of O2 provided life with new opportunities. Aerobic metabolism is more efficient than anaerobic pathways, and the presence of oxygen undoubtedly created new possibilities for life to explore.[8]:214, 586[9] Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume.[10] The maximum of 35% was reached towards the end of the Carboniferous period (about 300 million years ago), a peak which may have contributed to the large size of insects and amphibians at that time.[9] Whilst human activities, such as the burning of fossil fuels, affect relative carbon dioxide concentrations, their effect on the much larger concentration of oxygen is less significant.[11]
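The 2,000-year figure can be checked with a back-of-envelope calculation. The input values below are rough present-day estimates assumed for illustration, not figures from the source:

    $$ t \approx \frac{M_{\mathrm{O_2}}}{R_{\mathrm{O_2}}} \approx \frac{1.2 \times 10^{18}\ \mathrm{kg}}{6 \times 10^{14}\ \mathrm{kg/yr}} \approx 2{,}000\ \mathrm{yr} $$

Here $M_{\mathrm{O_2}}$ is the atmospheric oxygen inventory (roughly 23% by mass of a $5.1 \times 10^{18}$ kg atmosphere) and $R_{\mathrm{O_2}}$ is the oxygen release implied by a global gross primary production of order $2 \times 10^{14}$ kg of carbon per year, since each mole of carbon fixed releases one mole of O2 (a mass ratio of 32/12).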

Effects on life

The concentration of oxygen in the atmosphere is often cited as a possible contributor to large-scale evolutionary phenomena, such as the origin of the multicellular Ediacara biota, the Cambrian explosion, trends in animal body size, and other extinction and diversification events.[9]

The large size of insects and amphibians in the Carboniferous period, when the oxygen concentration in the atmosphere reached 35%, has been attributed to the limiting role of diffusion in these organisms' metabolism,[citation needed] though Haldane's essay[12] points out that the argument would apply only to insects. However, the biological basis for this correlation is not firm, and many lines of evidence show that oxygen concentration is not size-limiting in modern insects.[9] There is no significant correlation between atmospheric oxygen and maximum body size elsewhere in the geological record.[9] Ecological constraints can better explain the diminutive size of post-Carboniferous dragonflies: for instance, the appearance of flying competitors such as pterosaurs, birds and bats.[9]
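The diffusion argument can be made concrete with a classic steady-state estimate (a textbook-style sketch, not a calculation from the source): for tissue of half-thickness $L$ consuming oxygen at a volumetric rate $q$, supplied by diffusion with coefficient $D$ from a surface concentration $C_0$, oxygen just reaches the innermost cells when

    $$ L_{\max} \approx \sqrt{\frac{2 D C_0}{q}} $$

The maximum diffusion-supplied body size therefore grows only as the square root of ambient oxygen, so raising atmospheric O2 from 21% to 35% would raise $L_{\max}$ by a factor of only about $\sqrt{35/21} \approx 1.3$.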

Rising oxygen concentrations have been cited as a driver for evolutionary diversification, although the physiological reasoning behind such claims is questionable, and a consistent pattern between oxygen concentrations and the rate of evolution is not clearly evident.[9] The most celebrated link between oxygen and evolution occurs at the end of the last of the Snowball glaciations, where complex multicellular life is first found in the fossil record. Under low oxygen concentrations and before the evolution of nitrogen fixation, biologically available nitrogen compounds were in limited supply,[13] and periodic "nitrogen crises" could render the ocean inhospitable to life.[9] Significant concentrations of oxygen were just one of the prerequisites for the evolution of complex life.[9] Models based on uniformitarian principles (i.e. extrapolating present-day ocean dynamics into deep time) suggest that such a concentration was only reached immediately before metazoa first appeared in the fossil record.[9] Further, anoxic or otherwise chemically "nasty" oceanic conditions that resemble those supposed to inhibit macroscopic life recur at intervals through the early Cambrian, and also in the late Cretaceous, with no apparent effect on lifeforms at these times.[9] This might suggest that the geochemical signatures found in ocean sediments reflect the atmosphere in a different way before the Cambrian, perhaps as a result of the fundamentally different mode of nutrient cycling in the absence of planktivory.[7][9]

Friday, April 14, 2017

Action at a distance

In physics, action at a distance is the concept that an object can be moved, changed, or otherwise affected without being physically touched (as in mechanical contact) by another object. That is, it is the nonlocal interaction of objects that are separated in space. Pioneering physicist Albert Einstein famously described the quantum version of the phenomenon as "spooky action at a distance".[1]

This term was used most often in the context of early theories of gravity and electromagnetism to describe how an object responds to the influence of distant objects. For example, Coulomb's law and the law of universal gravitation are such early theories.

More generally "action at a distance" describes the failure of early atomistic and mechanistic theories which sought to reduce all physical interaction to collision. The exploration and resolution of this problematic phenomenon led to significant developments in physics, from the concept of a field, to descriptions of quantum entanglement and the mediator particles of the Standard Model.[2]

Electricity and magnetism

Efforts to account for action at a distance in the theory of electromagnetism led to the development of the concept of a field which mediated interactions between currents and charges across empty space.

According to field theory, the Coulomb (electrostatic) interaction between charged particles is accounted for by the fact that charges produce an electric field around themselves, which other charges feel as a force. Maxwell directly addressed the subject of action at a distance in chapter 23 of his A Treatise on Electricity and Magnetism in 1873.[3] He began by reviewing the explanation of Ampère's formula given by Gauss and Weber. On page 437 he indicates the physicists' disgust with action at a distance. In 1845 Gauss wrote to Weber desiring "action, not instantaneous, but propagated in time in a similar manner to that of light." This aspiration was developed by Maxwell into the theory of an electromagnetic field described by Maxwell's equations, which used the field to elegantly account for all electromagnetic interactions, as well as light (which, until then, had been seen as a completely unrelated phenomenon). In Maxwell's theory, the field is its own physical entity, carrying momentum and energy across space, and action at a distance is only the apparent effect of local interactions of charges with their surrounding field.
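In modern notation, the two-step field account of the Coulomb interaction reads: a charge $q_1$ at the origin produces a field

    $$ \mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \frac{q_1}{r^2} \hat{\mathbf{r}} $$

and a second charge $q_2$ placed at $\mathbf{r}$ feels the purely local force $\mathbf{F} = q_2 \mathbf{E}(\mathbf{r})$, replacing the single action-at-a-distance law $F = q_1 q_2 / (4\pi\varepsilon_0 r^2)$.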

Electrodynamics was later described without fields (in Minkowski space) as the direct interaction of particles with lightlike separation vectors. This resulted in the Fokker–Tetrode–Schwarzschild action integral. This kind of electrodynamic theory is often called "direct interaction" to distinguish it from field theories where action at a distance is mediated by a localized field (localized in the sense that its dynamics are determined by the nearby field parameters).[4] This description of electrodynamics, in contrast with Maxwell's theory, explains apparent action at a distance not by postulating a mediating entity (the field) but by appealing to the natural geometry of special relativity.
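For reference, the action integral has the schematic form found in standard treatments (reproduced here as a sketch, not quoted from the source):

    $$ S = -\sum_i m_i c \int ds_i + \sum_{i<j} \frac{e_i e_j}{c} \iint \delta\!\left(s_{ij}^2\right)\, dx_i^\mu\, dx_{j\mu} $$

where $s_{ij}^2$ is the squared spacetime interval between points on the worldlines of particles $i$ and $j$. The delta function restricts the interaction to pairs of worldline points with lightlike separation, which is precisely what makes the theory "direct" rather than field-mediated.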

Direct interaction electrodynamics is explicitly symmetrical in time and avoids the infinite energy predicted in the field immediately surrounding point particles. Feynman and Wheeler showed that it can account for radiation and radiative damping (which had been considered strong evidence for the independent existence of the field). However, various proofs, beginning with that of Dirac, have shown that direct interaction theories (under reasonable assumptions) do not admit Lagrangian or Hamiltonian formulations (these are the so-called no-interaction theorems). Also significant is the measurement and theoretical description of the Lamb shift, which strongly suggests that charged particles interact with their own field. Because of these and other difficulties, fields have been elevated to fundamental status in quantum field theory, and modern physics has thus largely abandoned direct interaction theory.

Gravity

Newton

Newton's theory of gravity offered no prospect of identifying any mediator of gravitational interaction. His theory assumed that gravitation acts instantaneously, regardless of distance. Kepler's observations gave strong evidence that in planetary motion angular momentum is conserved. (The mathematical proof is only valid in the case of a Euclidean geometry.) In Newton's theory, gravity is an attractive force between two objects that arises because of their mass.
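The link between Kepler's observations and conservation of angular momentum is a one-line consequence of the force being central ($\mathbf{F} \parallel \mathbf{r}$):

    $$ \frac{d\mathbf{L}}{dt} = \frac{d}{dt}\left(\mathbf{r} \times m\mathbf{v}\right) = \mathbf{v} \times m\mathbf{v} + \mathbf{r} \times \mathbf{F} = 0 $$

Kepler's second law (equal areas swept in equal times) is exactly the statement that $|\mathbf{L}| = 2m\, dA/dt$ is constant.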

From a Newtonian perspective, action at a distance can be regarded as: "a phenomenon in which a change in intrinsic properties of one system induces a change in the intrinsic properties of a distant system, independently of the influence of any other systems on the distant system, and without there being a process that carries this influence contiguously in space and time" (Berkovitz 2008).[5]

A related question, raised by Ernst Mach, was how rotating bodies know how much to bulge at the equator. This, it seems, requires an action-at-a-distance from distant matter, informing the rotating object about the state of the universe. Einstein coined the term Mach's principle for this question.

It is inconceivable that inanimate Matter should, without the Mediation of something else, which is not material, operate upon, and affect other matter without mutual Contact…That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. Gravity must be caused by an Agent acting constantly according to certain laws; but whether this Agent be material or immaterial, I have left to the Consideration of my readers.[5]
— Isaac Newton, Letters to Bentley, 1692/3

Einstein

According to Albert Einstein's theory of special relativity, instantaneous action at a distance violates the relativistic upper limit on the speed of propagation of information. If one of the interacting objects were suddenly displaced from its position, the other object would feel its influence instantaneously, meaning information had been transmitted faster than the speed of light.

One of the conditions that a relativistic theory of gravitation must meet is that gravity be mediated at a speed that does not exceed c, the speed of light in a vacuum. Given the previous success of electrodynamics, it could be foreseen that the relativistic theory of gravitation would have to use the concept of a field or something similar.

This problem was resolved by Einstein's theory of general relativity, in which gravitational interaction is mediated by deformation of space-time geometry. Matter warps the geometry of space-time, and these effects are, as with electric and magnetic fields, propagated at the speed of light. Thus, in the presence of matter, space-time becomes non-Euclidean, resolving the apparent conflict between Newton's proof of the conservation of angular momentum and Einstein's theory of special relativity. Mach's question regarding the bulging of rotating bodies is resolved because local space-time geometry informs a rotating body about the rest of the universe. In Newton's theory of motion, space acts on objects but is not acted upon. In Einstein's theory of motion, matter acts upon space-time geometry, deforming it, and space-time geometry acts upon matter by affecting the behavior of geodesics.
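The sense in which geometry "acts upon matter" is captured by the geodesic equation: a free-falling particle's worldline $x^\mu(\tau)$ satisfies

    $$ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0 $$

where the Christoffel symbols $\Gamma^\mu_{\alpha\beta}$ are built from derivatives of the metric. Local curvature, not a force transmitted from afar, thus determines the motion.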

Gravitational waves were first detected by Advanced LIGO on 14 September 2015, finally providing direct measurements that validate this 100-year-old theory.[6]

Quantum mechanics

Since the early twentieth century, quantum mechanics has posed new challenges for the view that physical processes should obey locality. Whether quantum entanglement counts as action at a distance hinges on the nature of the wave function and decoherence, issues over which there is still considerable debate among scientists and philosophers. One important line of debate originated with Einstein who, together with Boris Podolsky and Nathan Rosen, challenged the idea that quantum mechanics offers a complete description of reality. They proposed a thought experiment involving an entangled pair of observables with non-commuting operators (e.g. position and momentum).[7]
This thought experiment, which came to be known as the EPR paradox, hinges on the principle of locality. A common presentation of the paradox is as follows: two particles interact and fly off in opposite directions. Even when the particles are so far apart that any classical interaction would be impossible (see principle of locality), a measurement of one particle nonetheless determines the corresponding result of a measurement of the other.
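In Bohm's later spin-1/2 version of the thought experiment, the pair is prepared in the singlet state

    $$ |\psi\rangle = \frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \right) $$

for which a spin measurement on particle A along any axis fixes with certainty the opposite outcome for the same measurement on particle B, however far apart the two particles are.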

After the EPR paper, several scientists, such as de Broglie, studied local hidden variables theories. In the 1960s John Bell derived an inequality that indicated a testable difference between the predictions of quantum mechanics and local hidden variables theories.[8] To date, all experiments testing Bell-type inequalities in situations analogous to the EPR thought experiment have had results consistent with the predictions of quantum mechanics, suggesting that local hidden variables theories can be ruled out. Whether or not this is interpreted as evidence for nonlocality depends on one's interpretation of quantum mechanics.[5]
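The best-known such inequality is the CHSH form of Bell's result: for measurement settings $a, a'$ on one side and $b, b'$ on the other, any local hidden variables theory requires the correlations to satisfy

    $$ \left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2 $$

whereas quantum mechanics predicts values up to $2\sqrt{2} \approx 2.83$ for suitably chosen settings on an entangled pair, and it is this violation of the bound of 2 that experiments consistently observe.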

Non-standard interpretations of quantum mechanics vary in their response to the EPR-type experiments. The Bohm interpretation gives an explanation based on nonlocal hidden variables for the correlations seen in entanglement. Many advocates of the many-worlds interpretation argue that it can explain these correlations in a way that does not require a violation of locality,[9] by allowing measurements to have non-unique outcomes.

Nothing

Nothing is a concept denoting the absence of something, and is associated with nothingness.[1] In nontechnical uses, nothing denotes things lacking importance, interest, value, relevance, or significance.[1] Nothingness is the state of being nothing,[2] the state of nonexistence of anything, or the property of having nothing.

Philosophy

Western philosophy

Some would consider the study of "nothing" to be foolish. A typical response of this type was voiced by Giacomo Casanova (1725–1798) in conversation with his landlord, one Dr. Gozzi, who also happened to be a priest.

However, "nothingness" has been treated as a serious subject for a very long time. In philosophy, to avoid linguistic traps over the meaning of "nothing", a phrase such as not-being is often employed to make clear what is being discussed.

Parmenides

One of the earliest western philosophers to consider nothing as a concept was Parmenides (5th century BC), who was a Greek philosopher of the monist school. He argued that "nothing" cannot exist by the following line of reasoning: To speak of a thing, one has to speak of a thing that exists. Since we can speak of a thing in the past, this thing must still exist (in some sense) now, and from this he concludes that there is no such thing as change. As a corollary, there can be no such things as coming-into-being, passing-out-of-being, or not-being.[4]

Parmenides was taken seriously by other philosophers, influencing, for instance, Socrates and Plato.[5] Aristotle gives Parmenides serious consideration but concludes: "Although these opinions seem to follow logically in a dialectical discussion, yet to believe them seems next door to madness when one considers the facts."[6]

Leucippus

Leucippus (early 5th century BC), one of the atomists, along with other philosophers of his time, made attempts to reconcile this monism with the everyday observation of motion and change. He accepted the monist position that there could be no motion without a void. The void is the opposite of being. It is not-being. On the other hand, there exists something known as an absolute plenum, a space filled with matter, and there can be no motion in a plenum because it is completely full. But, there is not just one monolithic plenum, for existence consists of a multiplicity of plenums. These are the invisibly small "atoms" of Greek atomist theory, later expanded by Democritus (circa 460 BC – 370 BC), which allows the void to "exist" between them. In this scenario, macroscopic objects can come-into-being, move through space, and pass into not-being by means of the coming together and moving apart of their constituent atoms. The void must exist to allow this to happen, or else the "frozen world" of Parmenides must be accepted.

Bertrand Russell points out that this does not exactly defeat the argument of Parmenides but, rather, ignores it by taking the rather modern scientific position of starting with the observed data (motion, etc.) and constructing a theory based on the data, as opposed to Parmenides' attempts to work from pure logic. Russell also observes that both sides were mistaken in believing that there can be no motion in a plenum, but arguably motion cannot start in a plenum.[7] Cyril Bailey notes that Leucippus is the first to say that a "thing" (the void) might be real without being a body and points out the irony that this comes from a materialistic atomist. Leucippus is therefore the first to say that "nothing" has a reality attached to it.[8]

Aristotle, Newton, Descartes

Aristotle (384–322 BC) provided the classic escape from the logical problem posed by Parmenides by distinguishing things that are matter and things that are space. In this scenario, space is not "nothing" but, rather, a receptacle in which objects of matter can be placed. The true void (as "nothing") is different from "space" and is removed from consideration.[9][10]

This characterisation of space reached its pinnacle with Isaac Newton who asserted the existence of absolute space.

René Descartes, on the other hand, returned to a Parmenides-like argument of denying the existence of space. For Descartes, there was matter, and there was extension of matter, leaving no room for the existence of "nothing".[11]

The idea that space can actually be empty was generally still not accepted by philosophers, who invoked arguments similar to the plenum reasoning. Although Descartes' views on this were challenged by Blaise Pascal, Descartes declined to overturn the traditional belief, commonly stated in the form "Nature abhors a vacuum". Matters remained so until Evangelista Torricelli invented the barometer in 1643 and showed that an empty space appeared at the top of the tube when a mercury-filled tube was inverted. This phenomenon became known as the Torricelli vacuum, and the unit of vacuum pressure, the torr, is named after him. Even Torricelli's teacher, the famous Galileo Galilei, had previously been unable to adequately explain the sucking action of a pump.[12]
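The height of Torricelli's mercury column follows from a simple pressure balance: the atmosphere can support a column of height

    $$ h = \frac{P_{\mathrm{atm}}}{\rho g} \approx \frac{1.013 \times 10^5\ \mathrm{Pa}}{(1.36 \times 10^4\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})} \approx 0.76\ \mathrm{m} $$

so a sealed tube longer than about 76 cm necessarily leaves an empty space, the Torricelli vacuum, above the mercury.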

John the Scot

John the Scot, or Johannes Scotus Eriugena (c. 815–877), held many beliefs that were surprisingly heretical for the time he lived in, though no action appears ever to have been taken against him. His ideas mostly stem from, or are based on, his work of translating Pseudo-Dionysius. His beliefs are essentially pantheist, and he classifies evil, amongst many other things, as not-being. This is done on the grounds that evil is the opposite of good, a quality of God, but God can have no opposite, since God is everything in the pantheist view of the world. Similarly, the idea that God created the world out of "nothing" is to be interpreted as meaning that the "nothing" here is synonymous with God.[13]

G. W. F. Hegel

Georg Wilhelm Friedrich Hegel (1770–1831) is the philosopher who brought the dialectical method to a new pinnacle of development. According to Hegel in the Science of Logic, the dialectical method consists of three steps. First, a thesis is given, which can be any proposition in logic. Second, the antithesis of the thesis is formed and, finally, a synthesis incorporating both thesis and antithesis. Hegel believed that no proposition taken by itself can be completely true. Only the whole can be true, and the dialectical synthesis was the means by which the whole could be examined in relation to a specific proposition. Truth consists of the whole process: separating out thesis, antithesis, or synthesis as a stand-alone statement results in something that is in some way or other untrue. The whole is called by Hegel the "Absolute" and is to be viewed as something spiritual. The concept of "nothing" arises right at the beginning of Hegel's Logic.[14]

Existentialists

The most prominent figure among the existentialists is Jean-Paul Sartre, whose ideas in his book Being and Nothingness (L'être et le néant) are heavily influenced by Being and Time (Sein und Zeit) of Martin Heidegger, although Heidegger later stated that he was misunderstood by Sartre.[15] Sartre defines two kinds of "being" (être). One kind is être-en-soi, the brute existence of things such as a tree. The other kind is être-pour-soi, which is consciousness. Sartre claims that this second kind of being is "nothing", since consciousness cannot be an object of consciousness and can possess no essence.[16] Sartre and, even more so, Jacques Lacan use this conception of nothing as the foundation of their atheist philosophy. Equating nothingness with being leads to creation from nothing, and hence God is no longer needed for there to be existence.[17]

Eastern philosophy

The understanding of 'nothing' varies widely between cultures, especially between Western and Eastern cultures and philosophical traditions. For instance, Śūnyatā (emptiness), unlike "nothingness", is considered to be a state of mind in some forms of Buddhism (see Nirvana, mu, and Bodhi). Achieving 'nothing' as a state of mind in this tradition allows one to be totally focused on a thought or activity at a level of intensity that they would not be able to achieve if they were consciously thinking. A classic example of this is an archer attempting to erase the mind and clear the thoughts to better focus on the shot. Some authors have pointed to similarities between the Buddhist conception of nothingness and the ideas of Martin Heidegger and existentialists like Sartre,[18][19] although this connection has not been explicitly made by the philosophers themselves.

In some Eastern philosophies, the concept of "nothingness" is characterized by an egoless state of being in which one fully realizes one's own small part in the cosmos.

The Kyoto School handles the concept of nothingness as well.

Computing

In computing, "nothing" can be a keyword (in VB.NET) used in place of something unassigned, a data abstraction. Although a computer's storage hardware always contains numbers, "nothing" symbolizes a value that the system treats as absent whenever the programmer so chooses. Many systems have similar capabilities but different keywords, such as "null", "NUL", "nil", and "None".[20]
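
A minimal sketch in Python, whose keyword for "nothing" is None (the variable names are illustrative):

    # "Nothing" as a data abstraction: None marks a value deliberately left absent.
    middle_name = None           # the slot exists, but holds no value

    if middle_name is None:      # an identity test is the idiomatic check in Python
        print("no middle name recorded")
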
To instruct a computer processor to do nothing, a keyword such as "NOP" may be available. This is a control abstraction; a processor that executes NOP will behave identically to a processor that does not process this directive.[21]
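
The control abstraction can be sketched the same way; Python's analogue of a processor NOP is the pass statement (again an illustrative example, not from the source):

    def do_nothing():
        # pass is a statement with no effect, standing in where the
        # syntax requires a body, much as NOP occupies an instruction slot.
        pass

    do_nothing()  # runs to completion and changes no state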

Wednesday, April 12, 2017

Human enhancement

From Wikipedia, the free encyclopedia

This electrically powered exoskeleton suit has been under development by researchers at the University of Tsukuba in Japan.

Human enhancement (augmentation) is "any attempt to temporarily or permanently overcome the current limitations of the human body through natural or artificial means. It is the use of technological means to select or alter human characteristics and capacities, whether or not the alteration results in characteristics and capacities that lie beyond the existing human range."[1][2][3]

Technologies

Human enhancement technologies (HET) are techniques that can be used not simply for treating illness and disability, but also for enhancing human characteristics and capacities.[4] The expression "human enhancement technologies" is relative to emerging technologies and converging technologies.[5] In some circles the expression "human enhancement" is roughly synonymous with human genetic engineering,[6][7] but it is used most often to refer to the general application of the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) to improve human performance.[5]

According to the National Intelligence Council's Global Trends 2030 report "human augmentation could allow civilian and military people to work more effectively, and in environments that were previously inaccessible". It states that "future retinal eye implants could enable night vision, and neuro-enhancements could provide superior memory recall or speed of thought. Neuro-pharmaceuticals will allow people to maintain concentration for longer periods of time or enhance their learning abilities. Augmented reality systems can provide enhanced experiences of real-world situations."[8]

In terms of technological enhancements, Kevin Warwick lists the possibilities as enhanced memory, enhanced communication, enhanced senses, multi-dimensional thinking, extending the body, in-built machine thinking, outsourced memory, and enhanced mathematics, speed of thinking and problem solving.[9] He also states that "a person's brain and body do not have to be in the same place".[10]

Existing technologies

Emerging technologies

Speculative technologies

  • Mind uploading, the hypothetical process of "transferring"/"uploading" or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device.
  • Exocortex, a theoretical artificial external information processing system that would augment a brain's biological high-level cognitive processes.
  • Endogenous artificial nutrition, such as a radioisotope generator that resynthesizes glucose (similarly to photosynthesis), amino acids and vitamins from their degradation products, theoretically allowing a person to go for weeks without food if necessary.

Ethics

Since the 1990s, several academics (such as some of the fellows of the Institute for Ethics and Emerging Technologies[14]) have risen to become advocates of the case for human enhancement while other academics (such as the members of President Bush's Council on Bioethics[15]) have become outspoken critics.[16]

Advocacy of the case for human enhancement is increasingly becoming synonymous with "transhumanism", a controversial ideology and movement which has emerged to support the recognition and protection of the right of citizens to either maintain or modify their own minds and bodies, so as to guarantee them freedom of choice and informed consent in using human enhancement technologies on themselves and their children.[17]

Neuromarketing consultant Zack Lynch argues that neurotechnologies will have a more immediate effect on society than gene therapy and will face less resistance as a pathway of radical human enhancement. He also argues that the concept of "enablement" needs to be added to the debate over "therapy" versus "enhancement".[18]

Although many proposals of human enhancement rely on fringe science, the very notion and prospect of human enhancement has sparked public controversy.[19][20][21]

Dale Carrico wrote that "human enhancement" is a loaded term which has eugenic overtones because it may imply the improvement of human hereditary traits to attain a universally accepted norm of biological fitness (at the possible expense of human biodiversity and neurodiversity), and therefore can evoke negative reactions far beyond the specific meaning of the term. Furthermore, Carrico wrote that enhancements which are self-evidently good, like "fewer diseases", are more the exception than the norm and even these may involve ethical tradeoffs, as the controversy about ADHD arguably demonstrates.[22]

However, the most common criticism of human enhancement is that it is, or will often be, practiced with a reckless and selfish short-term perspective that is ignorant of the long-term consequences for individuals and the rest of society. For example, there is the fear that some enhancements will create unfair physical or mental advantages for those who can and will use them, or that unequal access to such enhancements can and will further widen the gulf between the "haves" and "have-nots".[23][24][25][26] Futurist Ray Kurzweil has shown some concern that, within the century, humans may be required to merge with this technology in order to compete in the marketplace.[citation needed]

Other critics of human enhancement fear that such capabilities would change, for the worse, the dynamic relations within a family. Given a choice of superior qualities, parents effectively make their child rather than merely bearing it, and the newborn becomes a product of their will rather than a gift of nature to be loved unconditionally. This is problematic because it could harm the unconditional love a parent ought to give their child, and it could furthermore lead to serious disappointment if the child does not fulfill its engineered role.[27]

Accordingly, some advocates who want to use more neutral language and advance the public interest in so-called "human enhancement technologies" prefer the term "enablement" over "enhancement";[28] they defend and promote rigorous, independent safety testing of enabling technologies, as well as affordable, universal access to these technologies.[16]

Inequality and social disruption

Some believe that the ability to enhance oneself would reflect the overall goal of human life: to improve fitness and survivability. They claim that it is human nature to want to better ourselves via increased life expectancy, strength, and/or intelligence, and to become less fearful and more independent.[29] In today's world, however, there is stratification among socioeconomic classes that prevents the less wealthy from accessing these enhancements. The advantage gained by one person's enhancements implies a disadvantage for an unenhanced person.[30][8] Human enhancements thus raise a great debate about equality between the haves and the have-nots. A modern-day example of this is LASIK eye surgery, which only the wealthy can afford.

The enhancement of the human body could bring profound changes to everyday situations. Sports, for instance, would change dramatically if enhanced people were allowed to compete; there would be a clear disadvantage for those who are not enhanced.[30] With regard to economic programs, human enhancements would greatly increase life expectancy, which would require employers either to adjust their pension programs to compensate for a longer retirement term, or to delay the retirement age by another ten years or so. If birth rates do not decline as longevity increases, this could put more pressure on resources such as energy and food availability. A job candidate enhanced with a neural transplant that heightens their ability to compute and retain information would outcompete someone who is not enhanced. Another scenario might be a person with a hearing or sight enhancement who could intrude on privacy laws or expectations in an environment like a classroom or workplace. These enhancements could go undetected and give individuals an overall advantage.

Unfairness between those who receive enhancements and those who do not is a cause for concern, although unfairness already exists within our society without human enhancement.[31] An individual taking a math exam may have a better calculator than another, or a better suit at a job interview. The long-term physical advantage of genetic engineering or the short-term cognitive advantage of nootropics may be part of a greater issue: that of availability,[32] since how easily individuals can obtain such enhancements depends on their socioeconomic standing.

Geoffrey Miller claims that 21st century Chinese eugenics may allow the Chinese to increase the IQ of each subsequent generation by five to fifteen IQ points, and after a couple generations it "would be game over for Western global competitiveness." Miller recommends that we put aside our "self-righteous" Euro-American ideological biases and learn from the Chinese.[33]

Effects on identity

Human enhancement technologies can impact human identity by affecting one's self-conception.[34] This is problematic because enhancement technologies threaten to alter the self fundamentally to the point where the result is a different and inauthentic person.[citation needed] For example, extreme changes in personality may affect the individual's relationships because others can no longer relate to the new person.[26]

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...