Tuesday, July 24, 2018

Grid computing

From Wikipedia, the free encyclopedia

Grid computing is the collection of computer resources from multiple places to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

Grids are a form of distributed computing whereby a "super virtual computer" is composed of many networked loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

Overview

Grid computing combines computers from multiple administrative domains to reach a common goal or to solve a single task; the grid may then disappear just as quickly once the task is complete.[3]

The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. "The notion of a confined grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes cooperation".[4]

This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services.

Coordinating applications on Grids can be a complex task, especially when coordinating the flow of information across distributed computing resources. Grid workflow systems have been developed as a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the Grid context.

Comparison of grids and conventional supercomputers

“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public, or the Internet) by a conventional network interface, such as Ethernet. Each node can be purchased as commodity hardware, which, when combined, can produce computing capability comparable to that of a multiprocessor supercomputer at lower cost, owing to the economies of scale of producing commodity hardware compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.[5] The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.[citation needed]

There are also some differences in programming and deployment. It can be costly and difficult to write programs that can run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional, standalone programs, each given a different part of the same problem, to run on multiple machines. This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
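
As an illustration of this "thin layer" idea, here is a minimal Python sketch (not taken from any particular grid middleware) that splits one large problem into independent work units and runs the same standalone routine on each. The local process pool merely stands in for dispatch to remote grid nodes, and all function names are illustrative.

# A minimal sketch of the "thin grid layer" idea: the same standalone
# routine processes a different slice of one large problem on each machine.
# Here the "machines" are simulated with local processes; a real grid's
# middleware would ship each slice to a remote node instead.

from concurrent.futures import ProcessPoolExecutor

def count_primes_in_range(bounds):
    """Independent work unit: no communication with other units is needed."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

def make_work_units(n, chunk):
    """Split [0, n) into independent slices, one per (virtual) grid node."""
    return [(i, min(i + chunk, n)) for i in range(0, n, chunk)]

if __name__ == "__main__":
    units = make_work_units(200_000, 25_000)
    with ProcessPoolExecutor() as pool:          # stand-in for grid dispatch
        partial_counts = list(pool.map(count_primes_in_range, units))
    print("primes below 200000:", sum(partial_counts))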

Design considerations and variations

One feature of distributed grids is that they can be formed from computing resources belonging to one or more individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.

One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes. However, due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results in the expected time.
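
The following is a hedged Python sketch of the redundancy scheme just described, assuming a simple quorum of two matching results and an hour-long reporting deadline; it illustrates the idea rather than the protocol of any real middleware.

# Each work unit is sent to at least two randomly chosen nodes, a result is
# accepted only when two independent nodes agree, and units that time out
# are reassigned. All names and thresholds here are illustrative.

import random
import time

class WorkUnit:
    def __init__(self, uid, payload):
        self.uid = uid
        self.payload = payload
        self.results = {}                    # node_id -> reported result
        self.deadline = time.time() + 3600   # reassign if no report in an hour

def assign(unit, nodes, replication=2):
    """Pick `replication` distinct nodes at random (presumably different owners)."""
    return random.sample(nodes, replication)

def record_result(unit, node_id, result):
    unit.results[node_id] = result

def validate(unit):
    """Accept a result only if at least two different nodes reported it."""
    counts = {}
    for r in unit.results.values():
        counts[r] = counts.get(r, 0) + 1
    for result, n in counts.items():
        if n >= 2:
            return result          # quorum reached
    return None                    # still unverified; disagreeing nodes are suspect

def needs_reassignment(unit):
    return validate(unit) is None and time.time() > unit.deadline

if __name__ == "__main__":
    nodes = ["node-a", "node-b", "node-c", "node-d"]
    unit = WorkUnit(uid=1, payload=(0, 1000))
    chosen = assign(unit, nodes)
    for node in chosen:
        record_result(unit, node, 42)        # pretend both nodes computed 42
    print("assigned to:", chosen, "verified result:", validate(unit))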

The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust “client” nodes must place in the central system such as placing applications in virtual machines.

Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common one for various academic projects seeking public volunteers.

In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware independent. Example areas include SLA management, trust and security, virtual organization management, license management, portals, and data management. These technical areas may be taken care of in a commercial solution, though the cutting edge of each area is often found within specific research projects examining the field.

Market segmentation of the grid computing market

For the segmentation of the grid computing market, two perspectives need to be considered: the provider side and the user side:

The provider side

The overall grid market comprises several specific markets. These are the grid middleware market, the market for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.

Grid middleware is a specific software product that enables the sharing of heterogeneous resources and the formation of virtual organizations. It is installed and integrated into the existing infrastructure of the involved company or companies and provides a special layer placed between the heterogeneous infrastructure and the specific user applications. Major grid middlewares are the Globus Toolkit, gLite, and UNICORE.

Utility computing refers to the provision of grid computing and applications as a service, either as an open grid utility or as a hosting solution for one organization or a VO. Major players in the utility computing market are Sun Microsystems, IBM, and HP.

Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made possible by the use of grid middleware, as pointed out above.

Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more providers.” (Gartner 2007) Additionally, SaaS applications are based on a single set of common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a pay-as-you-go (PAYG) model or a subscription model that is based on usage. Providers of SaaS do not necessarily own the computing resources required to run their SaaS, and may therefore draw upon the utility computing market, which provides computing resources for SaaS providers.

The user side

For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy. The IT deployment strategy as well as the type of IT investments made are relevant aspects for potential grid users and play an important role for grid adoption.

CPU scavenging

CPU-scavenging, cycle-scavenging, or shared computing creates a “grid” from the unused resources in a network of participants (whether worldwide or internal to an organization). Typically this technique exploits the instruction cycles of desktop computers that would otherwise be wasted at night, during lunch, or even during the scattered seconds throughout the day when the computer is waiting for user input on relatively fast devices. In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power.

Many volunteer computing projects, such as BOINC, use the CPU scavenging model. Since nodes are likely to go "offline" from time to time, as their owners use their resources for their primary purpose, this model must be designed to handle such contingencies.

Creating an opportunistic environment is another implementation of CPU scavenging, in which a special workload management system harvests idle desktop computers for compute-intensive jobs; this is also referred to as an Enterprise Desktop Grid (EDG). For instance, HTCondor,[6] the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks, can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can be used to manage workload on a dedicated cluster of computers, or it can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment.
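
The Python sketch below mimics, in a deliberately simplified way and not in HTCondor's actual configuration language, the kind of opportunistic policy described above: jobs start only on machines whose keyboard and mouse have been idle long enough, and they are evicted as soon as the owner returns. The NodeStatus fields and thresholds are assumptions chosen for illustration.

# Simplified cycle-scavenging policy: start a grid job only on a quiet,
# owner-idle desktop, and give the desktop back when the owner is active.

from dataclasses import dataclass

@dataclass
class NodeStatus:
    name: str
    keyboard_idle_seconds: float   # time since last keyboard/mouse activity
    cpu_load: float                # recent load average
    running_grid_job: bool

MIN_IDLE = 15 * 60     # start jobs only after 15 minutes of owner inactivity
MAX_LOAD = 0.3         # ...and only if the machine is otherwise quiet

def may_start_job(node: NodeStatus) -> bool:
    return node.keyboard_idle_seconds >= MIN_IDLE and node.cpu_load <= MAX_LOAD

def must_evict_job(node: NodeStatus) -> bool:
    # The owner came back (activity within the last minute): give the desktop back.
    return node.running_grid_job and node.keyboard_idle_seconds < 60

if __name__ == "__main__":
    nodes = [
        NodeStatus("desk-01", keyboard_idle_seconds=1800, cpu_load=0.1, running_grid_job=False),
        NodeStatus("desk-02", keyboard_idle_seconds=5, cpu_load=0.9, running_grid_job=True),
    ]
    for n in nodes:
        print(n.name, "start" if may_start_job(n) else ("evict" if must_evict_job(n) else "wait"))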

History

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a new computing infrastructure" (1999). This was preceded by decades by the metaphor of utility computing (1961): computing as a public utility, analogous to the phone system.[7][8]

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.[9][10]

The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster and Steve Tuecke of the University of Chicago, and Carl Kesselman of the University of Southern California's Information Sciences Institute. The trio, who led the effort to create the Globus Toolkit, is widely regarded as the "fathers of the grid".[11] The toolkit incorporates not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of services needed to create an enterprise or global grid.[12]

In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster definition of grid computing (in terms of computing resources being consumed as electricity is from the power grid) and earlier utility computing. Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems as exemplified by the AppLogic system from 3tera.[citation needed]

Progress

In November 2006, Edward Seidel received the Sidney Fernbach Award at the Supercomputing Conference in Tampa, Florida,[13] "for outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions."[14] This award, which is one of the highest honors in computing, was given for his achievements in numerical relativity.

Fastest virtual supercomputers

As of October 2016, the Bitcoin network had computing power claimed to be equivalent to 21,247,253.65 PFLOPS (floating-point operations per second).[21] However, the elements of that network can perform only the one specific cryptographic hash computation required by the bitcoin protocol. They cannot perform general floating-point arithmetic operations; therefore, their computing power cannot be measured in FLOPS.[further explanation needed]

Projects and applications

Grid computing offers a way to solve Grand Challenge problems such as protein folding, financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using the information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and noncommercial clients, with those clients paying only for what they use, as with electricity or water.

Grid computing is being applied by the National Science Foundation's National Technology Grid, NASA's Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb Co., and American Express.

As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform are members of the World Community Grid.[15] One of the projects using BOINC is SETI@home, which was using more than 400,000 computers to achieve 0.828 TFLOPS as of October 2016. As of October 2016 Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent petaflops on over 110,000 machines.[16]

The European Union funded projects through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission[22] as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship program. Started on June 1, 2006, the project ran 42 months, until November 2009. The project was coordinated by Atos Origin. According to the project fact sheet, its mission was “to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies”. To extract best practice and common themes from the experimental implementations, two groups of consultants analyzed a series of pilots, one technical, one business. The project is significant not only for its long duration but also for its budget, which, at 24.8 million euros, is the largest of any FP6 integrated project. Of this, 15.7 million was provided by the European Commission and the remainder by its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.

The Enabling Grids for E-sciencE project, based in the European Union and including sites in Asia and the United States, was a follow-up project to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the LHC Computing Grid[23] (LCG), was developed to support experiments using the CERN Large Hadron Collider. A list of active sites participating within LCG can be found online,[24] as can real-time monitoring of the EGEE infrastructure.[25] The relevant software and documentation are also publicly accessible.[26] There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the LCG's data-intensive needs, may one day be available to home users, thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection.[27] The European Grid Infrastructure has also been used for other research activities and experiments, such as the simulation of oncological clinical trials.[28]

The distributed.net project was started in 1997. The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations.

In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle-scavenges on volunteer PCs connected to the Internet. The project ran on about 3.1 million machines before its close in 2007.[29]

Definitions

Today there are many definitions of grid computing:
  • In his article “What is the Grid? A Three Point Checklist”,[3] Ian Foster lists these primary attributes: computing resources are not administered centrally, open standards are used, and nontrivial quality of service is achieved.
  • Plaszczak/Wellner[30] define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
  • IBM defines grid computing as “the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across ‘multiple’ administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements”.[31]
  • An earlier example of the notion of computing as a utility was given in 1965 by MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating “like a power company or water company”.[32]
  • Buyya/Venugopal[33] define grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
  • CERN, one of the largest users of grid technology, talks of The Grid: “a service for sharing computer power and data storage capacity over the Internet.”[34]

Virtual screening

From Wikipedia, the free encyclopedia
 
Figure 1. Flow Chart of Virtual Screening[1]

Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme.

Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs.[4] As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds[5] can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process.[6][1] Virtual screening can be used to select in-house database compounds for screening, to choose compounds that can be purchased externally, and to choose which compound should be synthesized next.

Methods

There are two broad categories of screening techniques: ligand-based and structure-based.[7] The remainder of this page follows Figure 1, the flow chart of virtual screening.

Ligand-based

Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. These are known as pharmacophore models. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind.[8]

Another approach to ligand-based virtual screening is to use 2D chemical similarity analysis methods[9] to scan a database of molecules against one or more active ligand structures.
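
As a concrete illustration of 2D similarity scanning, the Python sketch below compares set-based fingerprints with the Tanimoto coefficient. Real pipelines would derive the fingerprints from chemical structures with a cheminformatics toolkit; the feature sets used here are invented purely for illustration.

# Rank a small "database" of molecules by 2D similarity to a known active.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity = |A intersect B| / |A union B| for binary fingerprints."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints: IDs of substructure features present in each molecule.
known_active = {1, 4, 7, 9, 15, 22}
database = {
    "cmpd-001": {1, 4, 7, 9, 15, 30},
    "cmpd-002": {2, 5, 8, 11},
    "cmpd-003": {1, 4, 9, 22, 40, 41},
}

# Rank the database by similarity to the active and keep the best candidates.
ranked = sorted(database.items(),
                key=lambda kv: tanimoto(known_active, kv[1]),
                reverse=True)
for name, fp in ranked:
    print(name, round(tanimoto(known_active, fp), 3))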

A popular approach to ligand-based virtual screening is based on searching molecules with shape similar to that of known actives, as such molecules will fit the target's binding site and hence will be likely to bind the target. There are a number of prospective applications of this class of techniques in the literature.[10][11] Pharmacophoric extensions of these 3D methods are also freely-available as webservers.[12][13]

Structure-based

Structure-based virtual screening involves docking of candidate ligands into a protein target followed by applying a scoring function to estimate the likelihood that the ligand will bind to the protein with high affinity.[14][15][16] Webservers oriented to prospective virtual screening are available to all.[17][18]

Computing Infrastructure

The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, is of O(N^2) computational complexity, where N is the number of atoms in the system. Because of the quadratic scaling with respect to the number of atoms, the computing infrastructure may vary from a laptop computer for a ligand-based method to a mainframe for a structure-based method.
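
A toy Python example of why the pairwise computation scales quadratically: every atom is compared with every other atom, so doubling N roughly quadruples the work. The distance-based interaction term is a placeholder, not the scoring function of any particular program.

# Count and evaluate all N*(N-1)/2 atom pairs to show the O(N^2) growth.

import random

def pairwise_energy(coords):
    """Sum a simple distance-based term over all atom pairs."""
    total = 0.0
    n = len(coords)
    for i in range(n):
        xi, yi, zi = coords[i]
        for j in range(i + 1, n):          # N*(N-1)/2 pairs -> O(N^2)
            xj, yj, zj = coords[j]
            r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            total += 1.0 / (r2 + 1e-9)     # placeholder interaction term
    return total

if __name__ == "__main__":
    for n_atoms in (100, 200, 400):
        coords = [(random.random(), random.random(), random.random())
                  for _ in range(n_atoms)]
        pairs = n_atoms * (n_atoms - 1) // 2
        print(f"N={n_atoms}: {pairs} pairs evaluated, energy={pairwise_energy(coords):.1f}")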

Ligand-based

Ligand-based methods typically require a fraction of a second for a single structure comparison operation. A single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds.

Structure-based

The size of the task requires a parallel computing infrastructure, such as a cluster of Linux systems running a batch queue processor (for example, Sun Grid Engine or Torque PBS) to handle the work.

A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high-speed indexing engine, such as Berkeley DB, may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp-up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high-scoring candidates, can then be run after the whole experiment has been run.
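
The following is a hedged Python sketch of the batching pattern described above: each cluster job scores a whole batch of compounds and appends its results to a log file, and a separate mining step extracts the top-scoring candidates afterwards. The score_compound function is a stand-in for a real docking or comparison call.

# Batch-per-job screening with log aggregation and a secondary mining pass.

import csv
import heapq

def score_compound(compound_id: str) -> float:
    # Placeholder: a real job would run docking or a similarity comparison here.
    return float(sum(ord(c) for c in compound_id) % 1000) / 1000.0

def run_batch(compound_ids, log_path):
    """One cluster job: process many compounds, amortizing job start-up cost."""
    with open(log_path, "a", newline="") as fh:
        writer = csv.writer(fh)
        for cid in compound_ids:
            writer.writerow([cid, f"{score_compound(cid):.4f}"])

def mine_top_hits(log_paths, top_n=10):
    """Secondary process: scan all job logs and keep the best-scoring compounds."""
    scored = []
    for path in log_paths:
        with open(path, newline="") as fh:
            for cid, score in csv.reader(fh):
                scored.append((float(score), cid))
    return heapq.nlargest(top_n, scored)

if __name__ == "__main__":
    run_batch([f"CMPD{i:05d}" for i in range(0, 500)], "batch_000.log")
    run_batch([f"CMPD{i:05d}" for i in range(500, 1000)], "batch_001.log")
    print(mine_top_hits(["batch_000.log", "batch_001.log"], top_n=5))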

Accuracy

The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest. Thus, the success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should therefore be considered with caution. Low hit rates of interesting scaffolds are clearly preferable to high hit rates of already known scaffolds.

Most tests of virtual screening studies in the literature are retrospective. In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules or just actives) from a library containing a much higher proportion of assumed inactives or decoys. By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target.[19][20][21][22]
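
To make the retrospective setup concrete, the sketch below computes an enrichment factor, a commonly used retrospective metric (named here as an assumption; the text above does not prescribe a specific metric). It compares the fraction of known actives recovered near the top of the ranked list with the fraction expected by chance.

# Enrichment factor for a ranked list of actives (1) and decoys (0).

def enrichment_factor(ranked_labels, top_fraction=0.01):
    """ranked_labels: list of 1 (active) / 0 (decoy), best-scored first."""
    n_total = len(ranked_labels)
    n_actives = sum(ranked_labels)
    n_top = max(1, int(n_total * top_fraction))
    actives_in_top = sum(ranked_labels[:n_top])
    hit_rate_top = actives_in_top / n_top
    hit_rate_random = n_actives / n_total
    return hit_rate_top / hit_rate_random if hit_rate_random else 0.0

# Toy example: 1000 compounds, 20 actives, 6 of them ranked in the top 1%.
ranked = [1] * 6 + [0] * 4 + [1] * 14 + [0] * 976
print(round(enrichment_factor(ranked, 0.01), 1))   # (6/10) / (20/1000) -> EF = 30.0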

Application to drug discovery

Virtual screening is very useful for identifying hit molecules that can serve as a starting point for medicinal chemistry. As the virtual screening approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has grown rapidly.[23]

Ligand-based methods

Ligand-based methods try to predict how ligands will bind to the receptor without knowing the receptor's structure. Using pharmacophore features, donors and acceptors are identified for each ligand; matching features are then overlaid, although it is unlikely that there is a single correct solution.[1]

Pharmacophore models

This technique merges the results of searches that use different reference compounds but the same descriptors and similarity coefficient. It is beneficial because it is more efficient than using a single reference structure and gives the most accurate performance when the actives are structurally diverse.[1]

A pharmacophore is an ensemble of steric and electronic features that is needed for an optimal supramolecular interaction, or interactions, with a biological target structure in order to trigger its biological response. A representative set of actives is chosen, and most methods then look for similar binding features. It is preferable to use multiple rigid molecules, and the ligands should be diverse; in other words, they should differ in the features that do not take part in binding.[1]

Structure

These methods build a predictive model from known active and known inactive compounds. QSAR (quantitative structure-activity relationship) models are restricted to small homogeneous datasets, while SAR (structure-activity relationship) approaches treat the data qualitatively and can be used with multiple structural classes and more than one binding mode. The resulting models prioritize compounds for lead discovery.[1]

Machine Learning

In order to use machine learning for this model of virtual screening, there must be a training set with known active and known inactive compounds. A model of activity is then computed by way of substructural analysis, recursive partitioning, support vector machines, k-nearest neighbors, or neural networks. The final step is finding the probability that a compound is active and then ranking each compound based on its probability of being active.[1]
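
Here is a minimal sketch of that train / predict-probability / rank workflow, assuming scikit-learn and toy fingerprint vectors; a random forest stands in for any of the learners listed above.

# Train on labelled compounds, predict activity probabilities, rank the library.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training set: 200 compounds x 64 binary fingerprint bits,
# labelled 1 (active) or 0 (inactive).
X_train = rng.integers(0, 2, size=(200, 64))
y_train = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Screening library: predict the probability of activity and rank by it.
X_library = rng.integers(0, 2, size=(1000, 64))
p_active = model.predict_proba(X_library)[:, 1]    # column 1 = P(active)
ranking = np.argsort(-p_active)                    # best candidates first

print("top 5 library indices:", ranking[:5])
print("their predicted probabilities:", np.round(p_active[ranking[:5]], 3))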

Substructural Analysis in Machine Learning

The first machine learning model used on large datasets was substructural analysis, created in 1973, in which each fragment substructure makes a continuous contribution to an activity of a specific type.[1] Substructure analysis is also a method that overcomes the difficulty of massive dimensionality when analyzing structures in drug design. An efficient substructure analysis is used for structures that have similarities to a multi-level building or tower. Geometry is used for numbering the boundary joints of a given structure from the onset towards the climax. When special static condensation and substitution routines are developed, this method proves more productive than previous substructure analysis models.[24]

Recursive Partitioning

Recursive partitioning is a method that creates a decision tree using qualitative data, finding rules that break the classes apart with a low error of misclassification and repeating each step until no sensible splits can be found. However, recursive partitioning can have poor predictive ability, even while producing fine-looking models.[1]

Structure-Based Methods: Known Protein-Ligand Docking

A ligand can bind into an active site within a protein; a docking search algorithm and a scoring function are used to identify the most likely binding pose for each individual ligand and to assign a priority order.

The Last Human

June 5, 2002 by Gregory Stock
Original link:  http://www.kurzweilai.net/the-last-human
Originally published April 2002. Excerpt from Redesigning Humans: Our Inevitable Genetic Future. Published on KurzweilAI.net June 5, 2002.
 
We are on the cusp of profound biological change, poised to transcend our current form and character on a journey to destinations of new imagination. The arrival of safe, reliable germline technology will signal the beginning of human self-design. Progressive self-transformation could change our descendants into something sufficiently different from our present selves to not be human in the sense we use the term now. But the ultimate question of our era is whether the cutting edge of life is destined to shift from its present biological substrate — the carbon and other organic materials of our flesh — to that of silicon and its ilk, as proposed by leading artificial-intelligence theorists such as Hans Moravec and Ray Kurzweil.


God and Nature first made us what we are, and then out of our own created genius we make ourselves what we want to be . . . Let the sky and God be our limit and Eternity our measurement.
-Marcus Garvey (1887-1940)
We know that Homo sapiens is not the final word in primate evolution, but few have yet grasped that we are on the cusp of profound biological change, poised to transcend our current form and character on a journey to destinations of new imagination.

At first glance, the very notion that we might become more than "human" seems preposterous. After all, we are still biologically identical in virtually every respect to our cave-dwelling ancestors. But this lack of change is deceptive. Never before have we had the power to manipulate human genetics to alter our biology in meaningful, predictable ways.

Bioethicists and scientists alike worry about the consequences of coming genetic technologies, but few have thought through the larger implications of the wave of new developments arriving in reproductive biology. Today in vitro fertilization is responsible for fewer than 1 percent of births in the United States; embryo selection numbers only in the hundreds of cases; cloning and human genetic modification still lie ahead. But give these emerging technologies a decade and they will be the cutting edge of human biological change.

These developments will write a new page in the history of life, allowing us to seize control of our evolutionary future. Our coming ability to choose our children’s genes will have immense social impact and raise difficult ethical dilemmas. Biological enhancement will lead us into unexplored realms, eventually challenging our basic ideas about what it means to be human.

Some imagine we will see the perils, come to our senses, and turn away from such possibilities. But when we imagine Prometheus stealing fire from the gods, we are not incredulous or shocked by his act. It is too characteristically human. To forgo the powerful technologies that genomics and molecular biology are bringing would be as out of character for humanity as it would be to use them without concern for the dangers they pose. We will do neither. The question is no longer whether we will manipulate embryos, but when, where, and how.

We have already felt the impact of previous advances in reproductive technology. Without the broad access to birth control that we take so for granted, the populations of Italy, Japan, and Germany would not be shrinking; birth rates in the developing world would not be falling. These are major shifts, yet unlike the public response to today’s high-tech developments, no impassioned voices protest birth control as an immense and dangerous experiment with our genetic future. Those opposing family planning seem more worried about the immorality of recreational sex than about human evolution.

In this book, we will examine the emerging reproductive technologies for selecting and altering human embryos. These developments, culminating in germline engineering — the manipulation of the genetics of egg or sperm (our "germinal" cells) to modify future generations — will have large consequences. Already, procedures that influence the germline are routine in labs working on fruit flies and mice, and researchers have done early procedures on nonhuman primates. Direct human germline manipulations may still be a decade or two away, but methods of choosing specific genes in an embryo are in use today to prevent disease, and sophisticated methods for making broader choices are arriving every year, bringing with them a taste of the ethical and social questions that will accompany comprehensive germline engineering.

The arrival of safe, reliable germline technology will signal the beginning of human self-design. We do not know where this development will ultimately take us, but it will transform the evolutionary process by drawing reproduction into a highly selective social process that is far more rapid and effective at spreading successful genes than traditional sexual competition and mate selection.

Human cloning has been a topic of passionate debate recently, but germline engineering and embryo selection have implications that are far more profound. When cloning becomes safe and reliable enough to use in humans — which is clearly not yet the case — it will be inherently conservative, if not extremely so. It will bring no new genetic constitutions into being, but will create genetic copies of people who already exist. The idea of a delayed identical twin is strange and unfamiliar, but not earthshattering. Most of us have met identical twins. They are very similar, yet different.

Dismissal of technology’s role in humanity’s genetic future is common even among biologists who use advanced technologies in their work. Perhaps the notion that we will control our evolutionary future seems too audacious. Perhaps the idea that humans might one day differ from us in fundamental ways is too disorienting. Most mass-media science fiction doesn’t challenge our thinking about this either. One of the last major sci-fi movies of the second millennium was The Phantom Menace, George Lucas’s 1999 prequel to Star Wars. Its vision of human biological enhancement was simple: there won’t be any. Lucas reveled in special effects and fantastical life forms, but altered us not a jot. Despite reptilian sidekicks with pedestal eyes and hard-bargaining insectoids that might have escaped from a Raid commercial, the film’s humans were no different from us. With the right accent and a coat and tie, the leader of the Galactic Republic might have been the president of France.

Such a vision of human continuity is reassuring. It lets us imagine a future in which we feel at home. Space pods, holographic telephones, laser pistols, and other amazing gadgets are enticing to many of us, but pondering a time when humans no longer exist is another story, one far too alien and unappealing to arouse our dramatic sympathies. We’ve seen too many apocalyptic images of nuclear, biological, and environmental disaster to think that the path to human extinction could be anything but horrific.

Yet the road to our eventual disappearance might be paved not by humanity’s failure but by its success. Progressive self-transformation could change our descendants into something sufficiently different from our present selves to not be human in the sense we use the term now. Such an occurrence would more aptly be termed a pseudoextinction, since it would not end our lineage. Unlike the saber-toothed tiger and other large mammals that left no descendants when our ancestors drove them to extinction, Homo sapiens would spawn its own successors by fast-forwarding its evolution.

Some disaster, of course, might derail our technological advance, or our biology might prove too complex to rework. But our recent deciphering of the human genome (the entirety of our genetic constitution) and our massive push to unravel life’s workings suggest that modification of our biology is far nearer to reality than the distant space travel we see in science fiction movies. Moreover, we are unlikely to achieve the technology to flit around the galaxy without being able to breach our own biology as well. The Human Genome Project is only a beginning.

Considering the barrage of press reports about the project, we naturally wonder how much is hype. Extravagant metaphor has not been lacking. We are deciphering the "code of codes," reading the "book of life," looking at the "holy grail of human biology." It is reminiscent of the enthusiasm that attended Neil Armstrong’s 1969 walk on the moon. Humanity seemed poised to march toward the stars, but 2001 has come and gone, and there has been no sentient computer like HAL, no odyssey to the moons of Jupiter. Thirty years from now, however, I do not think we will look back at the Human Genome Project with a similar wistful disappointment. Unlike outer space, genetics is at our core, and as we learn to manipulate it, we are learning to manipulate ourselves.

Well before this new millennium’s close, we will almost certainly change ourselves enough to become much more than simply human. In this book, I will explore the nature and meaning of these coming changes, place them within the larger context of our rapid progress in biology and technology, and examine the social and ethical implications of the first tentative steps we are now taking.

Many bioethicists do not share my perspective on where we are heading. They imagine that our technology might become potent enough to alter us, but that we will turn away from it and reject human enhancement. But the reshaping of human genetics and biology does not hinge on some cadre of demonic researchers hidden away in a lab in Argentina trying to pick up where Hitler left off. The coming possibilities will be the inadvertent spinoff of mainstream research that virtually everyone supports. Infertility, for example, is a source of deep pain for millions of couples. Researchers and clinicians working on in vitro fertilization (IVF) don’t think much about future human evolution, but nonetheless are building a foundation of expertise in conceiving, handling, testing, and implanting human embryos, and this will one day be the basis for the manipulation of the human species. Already, we are seeing attempts to apply this knowledge in highly controversial ways: as premature as today’s efforts to clone humans may be, they would be the flimsiest of fantasies if they could not draw on decades of work on human IVF.

Similarly, in early 2001 more than five hundred gene-therapy trials were under way or in review throughout the world. The researchers are trying to cure real people suffering from real diseases and are no more interested in the future of human evolution than the IVF researchers. But their progress toward inserting genes into adult cells will be one more piece of the foundation for manipulating human embryos.

Not everything that can be done should or will be done, of course, but once a relatively inexpensive technology becomes feasible in thousands of laboratories around the world and a sizable fraction of the population sees it as beneficial, it will be used.

Erewhon, the brilliant 1872 satire by Samuel Butler, contains a scene that suggests what would be needed to stop the coming reworking of human biology. Erewhon is a civilized land with archaic machines, the result of a civil war won by the "anti-machinists" five centuries before the book’s story takes place. After its victory, this faction outlawed all further mechanical progress and destroyed all improvements made in the previous three centuries. They felt that to do otherwise would be suicide. "Reflect upon the extraordinary advance which machines have made during the last few hundred years," wrote their ancient leader, "and note how slowly the animal and vegetable kingdoms are advancing . . . I fear none of the existing machines; what I fear is the extraordinary rapidity at which they are becoming something very different to what they are at present . . . Though our rebellion against their infant power will cause infinite suffering . . . we must [otherwise see] ourselves gradually superseded by our own creatures until we rank no higher in comparison with them, than the beasts of the field with ourselves."

Butler would no doubt have chuckled at his own prescience had he been able to watch the special-purpose IBM computer Deep Blue defeat world chess champion Garry Kasparov in May 1997. We are at a similar juncture with our early steps toward human genetic manipulation. To "protect" ourselves from the future reworking of our biology would require more than an occasional restriction; it would demand a research blockade of molecular genetics or even a general rollback of technology. That simply won’t occur, barring global bio-catastrophe and a bloody victory by today’s bio-Luddites.

One irony of humanity’s growing power to shape its own evolution is the identity of the architects. In 1998, I spoke at a conference on mammalian cloning in Washington, D.C., and met Ian Wilmut, the Scottish scientist whose cloning of Dolly had created such a furor the previous year. Affronted by my relative lack of concern about the eventual cloning of humans, he vehemently insisted that the idea was abhorrent and that I was irresponsible to say that it would likely occur within a decade. His anger surprised me, considering that I was only speaking about human cloning, whereas he had played a role in the breakthrough that might bring it about. Incidentally, patent attorneys at the Roslin Institute, where the work occurred, and PPL Therapeutics, which funded the work, did not overlook the importance of human applications, since claims on their patent specifically cover them.

We cannot hold ourselves apart from the biological heritage that has shaped us. What we learn from fruit flies, mice, or even a cute Dorset ewe named Dolly is relevant to us. No matter how much the scientists who perform basic research in animal genetics and reproduction may sometimes deny it, their work is a critical part of the control we will soon have over our biology. Our desire to apply the results of animal research to human medicine, after all, is what drives much of the funding of this work.

Over the past hundred years, the trajectory of the life sciences traces a clear shift from description to understanding to manipulation. At the close of the nineteenth century, describing new biological attributes or species was still a good Ph.D. project for a student. This changed during the twentieth century, and such observations became largely a means for understanding the workings of biology. That too is now changing, and in the first half of the twenty-first century, biological understanding will likely become less an end in itself than a means to manipulate biology. In one century, we have moved from observing to understanding to engineering.

Early Tinkering

The best gauge of how far we will go in manipulating our genetics and that of our children is not what we say to pollsters, but what we are doing in those areas in which we already can modify our biology. On August 2, 1998, Marco Pantani cycled along the Champs Élysées to win the eighty-fifth Tour de France, but the race’s real story was the scandal over performance enhancement — which, of course, means drugs.

The banned hormone erythropoietin was at the heart of this particular chapter in the ongoing saga of athletic performance enhancement. By raising the oxygen-carrying capacity of red blood cells, the drug can boost endurance by 10 to 15 percent. Early in the race, a stash of it was found in the car of the masseur of the Italian team Festina — one of the world’s best — and after an investigation the entire team was booted from the race. A few days later, more erythropoietin was found, this time in the possession of one of the handlers of the Dutch team, and several of its cyclists were kicked out. As police raids intensified, five Spanish teams and an Italian one quit in protest, leaving only fourteen of the original twenty-one teams.

The public had little sympathy for the cheaters, but a crowd of angry Festina supporters protested that their riders had been unfairly singled out, and the French minister of health insisted that doping had been going on since racing began. Two years later in a courtroom in Lille, the French sports icon Richard Virenque, five-time winner of the King of the Mountains jersey in the Tour de France, seemed to confirm as much when the president of the court asked him if he took doping products. "We don’t say doping," replied Virenque. "We say we’re ‘preparing for the race.’"

The most obvious problem with today’s performance-enhancing drugs — besides their being a way of cheating — is that they’re dangerous. And when one athlete uses them, others must follow suit to stay competitive. But more than safety is at issue. The concern is what sports will be like when competitors need medical pit crews. As difficult as the problem of doping is, it will soon worsen, because such drugs will become safer, more effective, and harder to detect.

Professional sports offers a preview of the spread of enhancement technology into other arenas. Sports may carry stronger incentives to cheat, and thus push athletes toward greater health risks, but the nonsporting world is not so different. A person working two jobs feels under pressure to produce, and so does a student taking a test or someone suffering the effects of growing old. When safe, reliable metabolic and physiological enhancers exist, the public will want them, even if they are illegal. To block their use will be far more daunting than today’s war on drugs. An antidrug commercial proclaiming "Dope is for dopes!" or one showing a frying egg with the caption "Your brain on drugs" would not persuade anyone to stop using a safe memory enhancer.

Aesthetic surgery is another budding field for enhancement. When we try to improve our appearance, the personal stakes are high because our looks are always with us. Knowing that the photographs of beautiful models in magazines are airbrushed does not make us any less self-conscious if we believe we have a smile too gummy, skin too droopy, breasts too small, a nose too big, a head too bald, or any other such "defects." Surgery to correct these nonmedical problems has been growing rapidly and spreading to an ever-younger clientele. Public approval of aesthetic surgery has climbed some 50 percent in the past decade in the United States. We may not be modifying our genes yet, but we are ever more willing to resort to surgery to hold back the most obvious (and superficial) manifestations of aging, or even simply to remodel our bodies. Nor is this only for the wealthy. In 1994, when the median income in the United States was around $38,000, two thirds of the 400,000 aesthetic surgeries were performed on those with a family income under $50,000, and health insurance rarely covered the procedures. Older women who have subjected themselves to numerous face-lifts but can no longer stave off the signs of aging are not a rarity. But the tragedy is not so much that these women fight so hard to deny the years of visible decline, but that their struggle against life’s natural ebb ultimately must fail. If such a decline were not inevitable, many people would eagerly embrace pharmaceutical or genetic interventions to retard aging.

The desire to triumph over our own mortality is an ancient dream, but it hardly stands alone. Whether we look at today’s manipulations of our bodies by face-lifts, tattoos, pierced ears, or erythropoietin, the same message rings loud and clear: if medicine one day enables us to manipulate our biology in appealing ways, many of us will do so — even if the benefits are dubious and the risks not insignificant. To most people, the earliest adopters of these technologies will seem reckless or crazy, but are they so different from the daredevil test pilots of jet aircraft in the 1950s? Virtually by definition, early users believe that the possible gains from their bravado justify the risks. Otherwise, they would wait for flawed procedures to be discarded, for technical glitches to be worked through, for interventions to become safer and more predictable.

In truth, as long as people compete with one another for money, status, and mates, as long as they look for ways to display their worth and uniqueness, they will look for an edge for themselves and their children.

People will make mistakes with these biological manipulations. People will abuse them. People will worry about them. But as much could be said about any potent new development. No governmental body will wave some legislative wand and make advanced genetic and reproductive technologies go away, and we would be foolish to want this. Our collective challenge is not to figure out how to block these developments, but how best to realize their benefits while minimizing our risks and safeguarding our rights and freedoms. This will not be easy.

Our history is not a tale of self-restraint. Ten thousand years ago, when humans first crossed the Bering Strait to enter the Americas, they found huge herds of mammoths and other large mammals. In short order, these Clovis peoples, named for the archaeological site in New Mexico where their tools were first identified, used their skill and weaponry to drive them to extinction. This was no aberration: the arrival of humans in Australia, New Zealand, Madagascar, Hawaii, and Easter Island brought the same slaughter of wildlife. We may like to believe that primitive peoples lived in balance with nature, but when they entered new lands, they reshaped them in profound, often destructive ways. Jared Diamond, a professor of physiology at the UCLA School of Medicine and an expert on how geography and environment have affected human evolution, has tried to reconcile this typical pattern with the rare instances in which destruction did not occur. He writes that while "small, long-established egalitarian societies can evolve conservationist practices, because they’ve had plenty of time to get to know their local environment and to perceive their own self-interest," these practices do not occur when a people suddenly colonizes an unfamiliar environment or acquires a potent new technology.

Our technology is evolving so rapidly that by the time we begin to adjust to one development, another is already surpassing it. The answer would seem to be to slow down and devise the best course in advance, but that notion is a mirage. Change is accelerating, not slowing, and even if we could agree on what to aim for, the goal would probably be unrealistic. Complex changes are occurring across too broad a front to chart a path. The future is too opaque to foresee the eventual impacts of important new technologies, much less whole bodies of knowledge like genomics (the study of genomes). No one understood the powerful effects of the automobile or television at its inception. Few appreciated that our use of antibiotics would lead to widespread drug resistance or that improved nutrition and public health in the developing world would help bring on a population explosion. Our blindness about the consequences of new reproductive technologies is nothing new, and we will not be able to erase the uncertainty by convening an august panel to think through the issues.

No shortcut is possible. As always, we will have to earn our knowledge by using the technology and learning from the problems that arise. Given that some people will dabble in the new procedures as soon as they become even remotely accessible, our safest path is to not drive early explorations underground. What we learn about such technology while it is imperfect and likely to be used by only a small number of people may help us figure out how to manage it more wisely as it matures.

Genes and Dreams

James Watson, codiscoverer of the structure of DNA, cowinner of the Nobel Prize, and first director of the Human Genome Project, is arguably the most famous biologist of our times. The double-helical structure of DNA that he and Francis Crick described in 1953 has become the universally recognized symbol of a scientific dawn whose brightness we have barely begun to glimpse. In 1998, I was the moderator of a panel on which he sat with a half-dozen other leading molecular biologists, including Leroy Hood, the scientist who developed the first automated DNA sequencer, and French Anderson, the father of human gene therapy. The topic was human germline engineering, and the audience numbered about a thousand, mostly nonscientists. Anderson intoned about the moral distinction between human therapy and enhancement and laid out a laundry list of constraints that would have to be met before germline interventions would be acceptable. The seventy-year-old Watson sat quietly, his thinly tufted head lolled back as though he were asleep on a bus, but he was wide awake, and later shot an oblique dig, complaining about "fundamentalists from Tulsa, Oklahoma," which just happens to be where Anderson grew up. Watson summed up his own view with inimitable bluntness: "No one really has the guts to say it, but if we could make better human beings by knowing how to add genes, why shouldn’t we?"

Anderson, a wiry two-time national karate champion in the over-sixty category, is unused to being attacked as a conservative. Too often he has been the point man for gene therapy, receiving death threats for his pioneering efforts in the early 1990s and for a more recent attempt to win approval for fetal gene therapy. But the landscape has shifted. When organizing this symposium, a colleague and I worried about disruptive demonstrators, and could find only an occasional article outside academia on human germline therapy. A year later, stories about "designer children" were getting major play in Time and Newsweek, and today I frequently receive e-mail from high school students doing term papers on the subject.

Watson’s simple question, "If we could make better humans . . . why shouldn’t we?" cuts to the heart of the controversy about human genetic enhancement. Worries about the procedure’s feasibility or safety miss the point. No serious scientists advocate manipulating human genetics until such interventions are safe and reliable.

Why all the fuss, then? Opinions may differ about what risks are acceptable, but virtually every physician agrees that any procedure needs to be safe and that any potential benefit must be weighed against the risks. Moreover, few prospective parents would accept even a moderately risky genetic enhancement for their child unless it was extremely beneficial and unobtainable in any easier way. In fact, some critics, such as Leon Kass, a well-known bioethicist at the University of Chicago who has long opposed such interventions, aren’t worried that this technology will fail, but that it will succeed, and succeed gloriously.

Their nightmare is that safe, reliable genetic manipulations will allow people to substantively enhance their biology. They believe that the use — and misuse — of this power will tear the fabric of our society. Such angst is particularly prevalent in western Europe, where most governments take a more conservative stand on the use of genetic technologies, even banning genetically altered foods. Stefan Winter, a physician at the University of Bonn and former vice president of the European Committee for Biomedical Ethics, says, "We should never apply germline gene interventions to human beings. The breeding of mankind would be a social nightmare from which no one could escape."

Given Hitler’s appalling foray into racial purification, European sensitivities are understandable, but they miss the bigger picture. The possibility of altering the genes of our prospective children is not some isolated spinoff of molecular biology but an integral part of the advancing technologies that are the culmination of a century of progress in the biological sciences. We have spent billions to unravel our biology, not out of idle curiosity, but in the hope of bettering our lives. We are not about to turn away from this.

The coming advances will challenge our fundamental notions about the rhythms and meaning of life. Today, the "natural" setting for the vast majority of humans, especially in the economically developed world, bears no resemblance to the stomping grounds of our primitive ancestors, and nothing suggests that we will be any more hesitant about "improving" our own biology than we were about "improving" our environment. The technological powers we have hitherto used so effectively to remake our world are now potent and precise enough for us to turn them on ourselves. Breakthroughs in the matrixlike arrays called DNA chips, which may soon read thirty thousand genes at a pop; in artificial chromosomes, which now divide as stably as their naturally occurring cousins; and in bio-informatics, the use of computer-driven methodologies to decipher our genomes — all are paving the way to human genetic engineering and the beginnings of human biological design.

The birth of Dolly caused a stir not because of any real possibility of swarms of replicated humans, but because of what it signified. Anyone could see that one of the most intimate aspects of our lives — the passing of life from one generation to the next — might one day change beyond recognition. Suddenly the idea that we could hold ourselves apart and remain who we are and as we are while transforming the world around us seemed untenable.

Difficult ethical issues about our use of genetic and reproductive technologies have already begun to emerge. It is illegal in much of the world to test fetal gender for the purpose of sex selection, but the practice is commonplace. A study in Bombay reported that an astounding 7,997 out of 8,000 aborted fetuses were female, and in South Korea sex-selective abortion has become so widespread that some 65 percent of thirdborn children are boys, presumably because couples with two daughters are unwilling to have a third girl. Nor is there any consensus among physicians about sex selection. In a recent poll, only 32 percent of doctors in the United States thought the practice should be illegal. Support for a ban ranged from 100 percent in Portugal to 22 percent in China. Although we may be uncomfortable with the idea of a woman aborting her fetus because of its gender, a culture that allows abortion at a woman’s sole discretion would require a major contortion to ban this form of sex selection.

Clearly, these technologies will be virtually impossible to control. As long as abortion and prenatal tests are available, parents who feel strongly about the sex of their child will use these tools. Such practices are nothing new. In nineteenth-century India, the British tried to stop female infanticide among high-caste Indians and failed. Modern technology, at least in India, may merely have substituted abortion for infanticide.

Sex selection highlights an important problem that greater control over human reproduction could bring. Some practices that seem unthreatening when used by any particular individual could become very challenging if they became widespread. If almost all couples had boys, the shortage of girls would obviously be disastrous, but extreme scenarios of this sort are highly suspect because they ignore corrective forces that usually come into play: as daughters grew scarce, for instance, their perceived value would rise, and parental preferences would tend to swing back toward girls.

Worry over potential sex imbalances is but one example of a general unease about embryo selection. Our choices about other aspects of our children’s genetics might create social imbalances too — for example, large numbers of children who conform to the media’s ideals of beauty. Such concerns multiply when we couple them with visions of a "slippery slope," whereby initial use, even if relatively innocuous, inevitably leads to ever more widespread and problematic future applications: as marijuana leads to cocaine, and social drinking to alcoholism, gender selection will lead to clusters of genetically enhanced superhumans who will dominate if not enslave us. If we accept such reasoning, the only way to avoid ultimate disaster is to avoid the route at the outset, and we clearly haven’t.

The argument that we should ban cloning and human germline therapy because they would reduce genetic diversity is a good example of the misuse of extrapolations of this sort. Even the birth of a whopping one million genetically altered children a year — more than ten times the total number of IVF births during the decade following the first such procedure in 1978 — would still be less than 1/100 of the babies born worldwide each year. The technology’s impact on society will be immense in many ways, but a consequential diminution of biological diversity is not worth worrying about.
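As a rough check of that fraction (assuming roughly 130 million births worldwide each year, a figure not given in the text):

$$\frac{1{,}000{,}000 \text{ altered births per year}}{130{,}000{,}000 \text{ total births per year}} \approx 0.0077 < \frac{1}{100}$$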

To noticeably narrow the human gene pool in the decades ahead, the technology would have to be applied in a consistent fashion and used a hundred times more frequently than even the strongest enthusiasts hope for. Such widespread use could never occur unless great numbers of people embraced the technology or governments forced them to submit to it. The former could happen only if people came to view the technology as extraordinarily safe, reliable, and desirable; the latter only if our democratic institutions had already suffered assaults so grave that the loss of genetic diversity would be the least of our problems. While there are many valid philosophical, social, ethical, scientific, and religious concerns about embryo selection and the manipulation of the human germline, the loss of genetic diversity is not one of them.

Flesh and Blood

As we explore the implications of advanced reproductive technologies, we must keep in mind the larger evolutionary context of the changes now under way. At first glance, human reproduction mediated by instruments, electronics, and pharmaceuticals in a modern laboratory seems unnatural and perverted. We are flesh and blood; this is not our place. But by the same token, we should abandon our vast buzzing honeycombs of steel, fiber optics, and concrete. Manhattan and Shanghai bear no resemblance to the African veldt that bore us.

Cocooned in the new environments we have fashioned, we can easily forget our kinship to our animal ancestors, but roughly 98 percent of our gene sequences are the same as a chimpanzee’s, 85 percent are the same as a mouse’s, and more than 50 percent of a fruit fly’s genes have human homologues. The immense differences between us and the earth’s other living creatures are less a result of our genetic and physiological dissimilarities than of the massive cultural construct we inhabit. Understanding this is an important element in finding the larger meaning of our coming control of human genetics and reproduction. And if we are to understand the social construction that is the embodiment of the human enterprise and the source of its technology, we need to see its larger evolutionary context.

A momentous transition took place 700 million years ago when single cells came together to form multicellular life. All the plants and animals we see today are but variations on that single theme — multicellularity. We all share a common origin, a common biochemistry, a common genetics, which is why researchers can ferry a jellyfish gene into a rabbit to make the rabbit’s skin fluoresce under ultraviolet light, or use a mammalian growth-hormone gene to make salmon grow larger.

Today we are in the midst of a second and equally momentous evolutionary transition: the human-led fusion of life into a vast network of people, crops, animals, and machines. A whir of trade and telecommunications is binding our technological and biological creations into a vast social organism of planetary dimensions. And this entity’s emergent powers are expanding our individual potentials far beyond those of other primates.

This global matrix has taken form in only a few thousand years and grows ever tighter and more interconnected. The process started slowly among preliterate hunter-gatherers, but once humans learned to write, they began to accumulate knowledge outside their brains. Change began to accelerate. The storage capacity for information became essentially unlimited, even if sifting through that information on the tablets and scrolls where it resided was hard. Now, however, with the advent of the computer, the power to electronically manipulate and sort this growing body of information is speeding up to the point where such processing occurs nearly as easily as it previously did within our brains. With the amount of accessible information exploding on the Internet and elsewhere, small wonder that our technology is racing ahead.

The social organism we have created gives us not only the language, art, music, and religion that in so many ways define our humanity, but the capacity to remake our own form and character. The profound shifts in our lives and values in the past century are not some cultural fluke; they are the child of a larger transformation wrought by the diffusion of technology into virtually every aspect of our lives, by trade and instantaneous global telecommunications, and by the growing manipulation of the physical and biological worlds around us.

Critical changes, unprecedented in the long history of life, are under way. With the silicon chip we are making complex machines that rival life itself. With the space program we are moving beyond the thin planetary film that has hitherto constrained life. With our biological research we are taking control of evolution and beginning to direct it.

The coming challenges of human genetic enhancement are not going to melt away; they will intensify decade by decade as we continue to unravel our biology, our nature, and the physical universe. Humanity is moving out of its childhood and into a gawky, stumbling adolescence in which it must learn not only to acknowledge its immense new powers, but to figure out how to use them wisely. The choices we face are daunting, but putting our heads in the sand is not the solution.

Germline engineering embodies our deepest fears about today’s revolution in biology. Indeed, the technology is the ultimate expression of that revolution because it may enable us to remake ourselves. But the issue of human genetic enhancement, challenging as it is, may not be the most difficult possibility we face. Recent breakthroughs in biology could not have been made without the assistance of computerized instrumentation, data analysis, and communications. Given the blistering pace of computer evolution and the Hollywood plots with skin-covered cyborgs or computer chips embedded in people’s brains, we naturally wonder whether cybernetic developments that blur the line between human and machine will overshadow our coming ability to alter ourselves biologically.

The ultimate question of our era is whether the cutting edge of life is destined to shift from its present biological substrate — the carbon and other organic materials of our flesh — to that of silicon and its ilk, as proposed by leading artificial-intelligence theorists such as Hans Moravec and Ray Kurzweil. They believe that the computer will soon transcend us. To be the "last humans," in the sense that future humans will modify their biology sufficiently to differ from us in meaningful ways, seems tame compared to giving way to machines, as the Erewhonians so feared. Before we look more deeply at human biological enhancement and what it may bring, we must consider what truth these machine dreams contain.

"The Last Human" from Redesigning Humans: Our Inevitable Genetic Future by Gregory Stock. Copyright © 2002 by Gregory Stock. Reprinted by permission of Houghton Mifflin Company.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...